00:00:00.001 Started by upstream project "autotest-per-patch" build number 132536 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.097 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.098 The recommended git tool is: git 00:00:00.098 using credential 00000000-0000-0000-0000-000000000002 00:00:00.100 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.168 Fetching changes from the remote Git repository 00:00:00.170 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.229 Using shallow fetch with depth 1 00:00:00.229 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.229 > git --version # timeout=10 00:00:00.285 > git --version # 'git version 2.39.2' 00:00:00.285 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.323 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.323 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.928 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.946 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.960 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.960 > git config core.sparsecheckout # timeout=10 00:00:05.971 > git read-tree -mu HEAD # timeout=10 00:00:05.987 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.009 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.009 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.153 [Pipeline] Start of Pipeline 00:00:06.172 [Pipeline] library 00:00:06.174 Loading library shm_lib@master 00:00:06.174 Library shm_lib@master is cached. Copying from home. 00:00:06.190 [Pipeline] node 00:00:06.197 Running on VM-host-SM0 in /var/jenkins/workspace/nvme-vg-autotest 00:00:06.199 [Pipeline] { 00:00:06.207 [Pipeline] catchError 00:00:06.209 [Pipeline] { 00:00:06.221 [Pipeline] wrap 00:00:06.229 [Pipeline] { 00:00:06.235 [Pipeline] stage 00:00:06.237 [Pipeline] { (Prologue) 00:00:06.252 [Pipeline] echo 00:00:06.254 Node: VM-host-SM0 00:00:06.258 [Pipeline] cleanWs 00:00:06.266 [WS-CLEANUP] Deleting project workspace... 00:00:06.266 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.271 [WS-CLEANUP] done 00:00:06.492 [Pipeline] setCustomBuildProperty 00:00:06.582 [Pipeline] httpRequest 00:00:07.382 [Pipeline] echo 00:00:07.383 Sorcerer 10.211.164.101 is alive 00:00:07.392 [Pipeline] retry 00:00:07.393 [Pipeline] { 00:00:07.406 [Pipeline] httpRequest 00:00:07.410 HttpMethod: GET 00:00:07.411 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.411 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.417 Response Code: HTTP/1.1 200 OK 00:00:07.418 Success: Status code 200 is in the accepted range: 200,404 00:00:07.418 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:17.929 [Pipeline] } 00:00:17.947 [Pipeline] // retry 00:00:17.956 [Pipeline] sh 00:00:18.238 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:18.255 [Pipeline] httpRequest 00:00:18.672 [Pipeline] echo 00:00:18.673 Sorcerer 10.211.164.101 is alive 00:00:18.683 [Pipeline] retry 00:00:18.685 [Pipeline] { 00:00:18.702 [Pipeline] httpRequest 00:00:18.707 HttpMethod: GET 00:00:18.708 URL: http://10.211.164.101/packages/spdk_51a65534eb03c3135976481c3cfdb30720d2ea27.tar.gz 00:00:18.708 Sending request to url: http://10.211.164.101/packages/spdk_51a65534eb03c3135976481c3cfdb30720d2ea27.tar.gz 00:00:18.722 Response Code: HTTP/1.1 200 OK 00:00:18.723 Success: Status code 200 is in the accepted range: 200,404 00:00:18.723 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_51a65534eb03c3135976481c3cfdb30720d2ea27.tar.gz 00:02:05.955 [Pipeline] } 00:02:05.974 [Pipeline] // retry 00:02:05.983 [Pipeline] sh 00:02:06.262 + tar --no-same-owner -xf spdk_51a65534eb03c3135976481c3cfdb30720d2ea27.tar.gz 00:02:09.558 [Pipeline] sh 00:02:09.839 + git -C spdk log --oneline -n5 00:02:09.839 51a65534e bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:02:09.839 0617ba6b2 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev 00:02:09.839 bb877d8c1 nvmf: Expose DIF type of namespace to host again 00:02:09.839 9f3071c5f nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write 00:02:09.839 5ca6db5da nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK 00:02:09.859 [Pipeline] writeFile 00:02:09.874 [Pipeline] sh 00:02:10.156 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:10.167 [Pipeline] sh 00:02:10.447 + cat autorun-spdk.conf 00:02:10.447 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.447 SPDK_TEST_NVME=1 00:02:10.447 SPDK_TEST_FTL=1 00:02:10.447 SPDK_TEST_ISAL=1 00:02:10.447 SPDK_RUN_ASAN=1 00:02:10.447 SPDK_RUN_UBSAN=1 00:02:10.447 SPDK_TEST_XNVME=1 00:02:10.447 SPDK_TEST_NVME_FDP=1 00:02:10.447 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:10.453 RUN_NIGHTLY=0 00:02:10.455 [Pipeline] } 00:02:10.472 [Pipeline] // stage 00:02:10.488 [Pipeline] stage 00:02:10.490 [Pipeline] { (Run VM) 00:02:10.502 [Pipeline] sh 00:02:10.781 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:10.781 + echo 'Start stage prepare_nvme.sh' 00:02:10.781 Start stage prepare_nvme.sh 00:02:10.781 + [[ -n 3 ]] 00:02:10.781 + disk_prefix=ex3 00:02:10.781 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:02:10.781 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:02:10.781 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:02:10.781 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:10.781 ++ SPDK_TEST_NVME=1 00:02:10.781 ++ 
SPDK_TEST_FTL=1 00:02:10.781 ++ SPDK_TEST_ISAL=1 00:02:10.781 ++ SPDK_RUN_ASAN=1 00:02:10.781 ++ SPDK_RUN_UBSAN=1 00:02:10.781 ++ SPDK_TEST_XNVME=1 00:02:10.781 ++ SPDK_TEST_NVME_FDP=1 00:02:10.781 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:10.781 ++ RUN_NIGHTLY=0 00:02:10.781 + cd /var/jenkins/workspace/nvme-vg-autotest 00:02:10.781 + nvme_files=() 00:02:10.781 + declare -A nvme_files 00:02:10.781 + backend_dir=/var/lib/libvirt/images/backends 00:02:10.781 + nvme_files['nvme.img']=5G 00:02:10.781 + nvme_files['nvme-cmb.img']=5G 00:02:10.781 + nvme_files['nvme-multi0.img']=4G 00:02:10.781 + nvme_files['nvme-multi1.img']=4G 00:02:10.781 + nvme_files['nvme-multi2.img']=4G 00:02:10.781 + nvme_files['nvme-openstack.img']=8G 00:02:10.781 + nvme_files['nvme-zns.img']=5G 00:02:10.781 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:10.781 + (( SPDK_TEST_FTL == 1 )) 00:02:10.781 + nvme_files["nvme-ftl.img"]=6G 00:02:10.781 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:10.781 + nvme_files["nvme-fdp.img"]=1G 00:02:10.781 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:02:10.781 + for nvme in "${!nvme_files[@]}" 00:02:10.781 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:02:10.781 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:10.781 + for nvme in "${!nvme_files[@]}" 00:02:10.781 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-ftl.img -s 6G 00:02:10.781 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:02:10.781 + for nvme in "${!nvme_files[@]}" 00:02:10.781 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:02:10.781 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:10.781 + for nvme in "${!nvme_files[@]}" 00:02:10.781 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:02:10.781 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:10.781 + for nvme in "${!nvme_files[@]}" 00:02:10.781 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:02:10.781 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:10.781 + for nvme in "${!nvme_files[@]}" 00:02:10.781 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:02:10.781 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:10.781 + for nvme in "${!nvme_files[@]}" 00:02:10.781 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:02:11.038 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:11.038 + for nvme in "${!nvme_files[@]}" 00:02:11.038 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-fdp.img -s 1G 00:02:11.038 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:02:11.038 + for nvme in "${!nvme_files[@]}" 00:02:11.038 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n 
/var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:02:11.295 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:11.295 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:02:11.295 + echo 'End stage prepare_nvme.sh' 00:02:11.295 End stage prepare_nvme.sh 00:02:11.305 [Pipeline] sh 00:02:11.584 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:11.585 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex3-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:02:11.585 00:02:11.585 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:02:11.585 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:02:11.585 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:02:11.585 HELP=0 00:02:11.585 DRY_RUN=0 00:02:11.585 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,/var/lib/libvirt/images/backends/ex3-nvme-fdp.img, 00:02:11.585 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:02:11.585 NVME_AUTO_CREATE=0 00:02:11.585 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,, 00:02:11.585 NVME_CMB=,,,, 00:02:11.585 NVME_PMR=,,,, 00:02:11.585 NVME_ZNS=,,,, 00:02:11.585 NVME_MS=true,,,, 00:02:11.585 NVME_FDP=,,,on, 00:02:11.585 SPDK_VAGRANT_DISTRO=fedora39 00:02:11.585 SPDK_VAGRANT_VMCPU=10 00:02:11.585 SPDK_VAGRANT_VMRAM=12288 00:02:11.585 SPDK_VAGRANT_PROVIDER=libvirt 00:02:11.585 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:11.585 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:11.585 SPDK_OPENSTACK_NETWORK=0 00:02:11.585 VAGRANT_PACKAGE_BOX=0 00:02:11.585 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:11.585 FORCE_DISTRO=true 00:02:11.585 VAGRANT_BOX_VERSION= 00:02:11.585 EXTRA_VAGRANTFILES= 00:02:11.585 NIC_MODEL=e1000 00:02:11.585 00:02:11.585 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:02:11.585 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:02:14.867 Bringing machine 'default' up with 'libvirt' provider... 00:02:16.239 ==> default: Creating image (snapshot of base box volume). 00:02:16.239 ==> default: Creating domain with the following settings... 
00:02:16.239 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732644230_3a0bf3b683c85c34eed2 00:02:16.239 ==> default: -- Domain type: kvm 00:02:16.239 ==> default: -- Cpus: 10 00:02:16.239 ==> default: -- Feature: acpi 00:02:16.239 ==> default: -- Feature: apic 00:02:16.239 ==> default: -- Feature: pae 00:02:16.239 ==> default: -- Memory: 12288M 00:02:16.239 ==> default: -- Memory Backing: hugepages: 00:02:16.239 ==> default: -- Management MAC: 00:02:16.239 ==> default: -- Loader: 00:02:16.239 ==> default: -- Nvram: 00:02:16.239 ==> default: -- Base box: spdk/fedora39 00:02:16.239 ==> default: -- Storage pool: default 00:02:16.239 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732644230_3a0bf3b683c85c34eed2.img (20G) 00:02:16.239 ==> default: -- Volume Cache: default 00:02:16.239 ==> default: -- Kernel: 00:02:16.239 ==> default: -- Initrd: 00:02:16.239 ==> default: -- Graphics Type: vnc 00:02:16.239 ==> default: -- Graphics Port: -1 00:02:16.239 ==> default: -- Graphics IP: 127.0.0.1 00:02:16.239 ==> default: -- Graphics Password: Not defined 00:02:16.239 ==> default: -- Video Type: cirrus 00:02:16.239 ==> default: -- Video VRAM: 9216 00:02:16.239 ==> default: -- Sound Type: 00:02:16.239 ==> default: -- Keymap: en-us 00:02:16.239 ==> default: -- TPM Path: 00:02:16.239 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:16.239 ==> default: -- Command line args: 00:02:16.239 ==> default: -> value=-device, 00:02:16.239 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:16.239 ==> default: -> value=-drive, 00:02:16.239 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:02:16.239 ==> default: -> value=-device, 00:02:16.239 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:02:16.239 ==> default: -> value=-device, 00:02:16.239 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:16.239 ==> default: -> value=-drive, 00:02:16.239 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-1-drive0, 00:02:16.239 ==> default: -> value=-device, 00:02:16.239 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:16.239 ==> default: -> value=-device, 00:02:16.239 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:02:16.239 ==> default: -> value=-drive, 00:02:16.239 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:02:16.239 ==> default: -> value=-device, 00:02:16.239 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:16.239 ==> default: -> value=-drive, 00:02:16.239 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:02:16.239 ==> default: -> value=-device, 00:02:16.239 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:16.239 ==> default: -> value=-drive, 00:02:16.239 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:02:16.239 ==> default: -> value=-device, 00:02:16.239 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:16.239 ==> default: -> value=-device, 00:02:16.239 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:02:16.239 ==> default: -> value=-device, 00:02:16.239 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:02:16.239 ==> default: -> value=-drive, 00:02:16.239 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:02:16.239 ==> default: -> value=-device, 00:02:16.239 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:16.239 ==> default: Creating shared folders metadata... 00:02:16.239 ==> default: Starting domain. 00:02:18.144 ==> default: Waiting for domain to get an IP address... 00:02:40.099 ==> default: Waiting for SSH to become available... 00:02:40.099 ==> default: Configuring and enabling network interfaces... 00:02:43.453 default: SSH address: 192.168.121.234:22 00:02:43.453 default: SSH username: vagrant 00:02:43.453 default: SSH auth method: private key 00:02:45.356 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:53.465 ==> default: Mounting SSHFS shared folder... 00:02:54.839 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:54.839 ==> default: Checking Mount.. 00:02:55.773 ==> default: Folder Successfully Mounted! 00:02:55.773 ==> default: Running provisioner: file... 00:02:56.709 default: ~/.gitconfig => .gitconfig 00:02:56.990 00:02:56.990 SUCCESS! 00:02:56.990 00:02:56.990 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:56.990 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:56.990 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:56.990 00:02:57.000 [Pipeline] } 00:02:57.016 [Pipeline] // stage 00:02:57.025 [Pipeline] dir 00:02:57.026 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:57.028 [Pipeline] { 00:02:57.040 [Pipeline] catchError 00:02:57.042 [Pipeline] { 00:02:57.057 [Pipeline] sh 00:02:57.340 + vagrant ssh-config --host vagrant 00:02:57.340 + sed -ne /^Host/,$p 00:02:57.340 + tee ssh_conf 00:03:01.571 Host vagrant 00:03:01.571 HostName 192.168.121.234 00:03:01.571 User vagrant 00:03:01.571 Port 22 00:03:01.571 UserKnownHostsFile /dev/null 00:03:01.571 StrictHostKeyChecking no 00:03:01.571 PasswordAuthentication no 00:03:01.571 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:03:01.571 IdentitiesOnly yes 00:03:01.571 LogLevel FATAL 00:03:01.571 ForwardAgent yes 00:03:01.571 ForwardX11 yes 00:03:01.571 00:03:01.586 [Pipeline] withEnv 00:03:01.589 [Pipeline] { 00:03:01.605 [Pipeline] sh 00:03:01.887 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:03:01.887 source /etc/os-release 00:03:01.887 [[ -e /image.version ]] && img=$(< /image.version) 00:03:01.887 # Minimal, systemd-like check. 
00:03:01.887 if [[ -e /.dockerenv ]]; then 00:03:01.887 # Clear garbage from the node's name: 00:03:01.887 # agt-er_autotest_547-896 -> autotest_547-896 00:03:01.887 # $HOSTNAME is the actual container id 00:03:01.887 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:03:01.887 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:03:01.887 # We can assume this is a mount from a host where container is running, 00:03:01.887 # so fetch its hostname to easily identify the target swarm worker. 00:03:01.887 container="$(< /etc/hostname) ($agent)" 00:03:01.887 else 00:03:01.887 # Fallback 00:03:01.887 container=$agent 00:03:01.887 fi 00:03:01.887 fi 00:03:01.887 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:03:01.887 00:03:01.899 [Pipeline] } 00:03:01.917 [Pipeline] // withEnv 00:03:01.926 [Pipeline] setCustomBuildProperty 00:03:01.942 [Pipeline] stage 00:03:01.945 [Pipeline] { (Tests) 00:03:01.963 [Pipeline] sh 00:03:02.243 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:03:02.255 [Pipeline] sh 00:03:02.530 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:03:02.801 [Pipeline] timeout 00:03:02.801 Timeout set to expire in 50 min 00:03:02.804 [Pipeline] { 00:03:02.816 [Pipeline] sh 00:03:03.106 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:03:03.671 HEAD is now at 51a65534e bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask 00:03:03.683 [Pipeline] sh 00:03:03.966 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:03:04.237 [Pipeline] sh 00:03:04.516 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:03:04.791 [Pipeline] sh 00:03:05.069 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:03:05.069 ++ readlink -f spdk_repo 00:03:05.327 + DIR_ROOT=/home/vagrant/spdk_repo 00:03:05.327 + [[ -n /home/vagrant/spdk_repo ]] 00:03:05.328 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:03:05.328 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:03:05.328 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:03:05.328 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:03:05.328 + [[ -d /home/vagrant/spdk_repo/output ]] 00:03:05.328 + [[ nvme-vg-autotest == pkgdep-* ]] 00:03:05.328 + cd /home/vagrant/spdk_repo 00:03:05.328 + source /etc/os-release 00:03:05.328 ++ NAME='Fedora Linux' 00:03:05.328 ++ VERSION='39 (Cloud Edition)' 00:03:05.328 ++ ID=fedora 00:03:05.328 ++ VERSION_ID=39 00:03:05.328 ++ VERSION_CODENAME= 00:03:05.328 ++ PLATFORM_ID=platform:f39 00:03:05.328 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:05.328 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:05.328 ++ LOGO=fedora-logo-icon 00:03:05.328 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:05.328 ++ HOME_URL=https://fedoraproject.org/ 00:03:05.328 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:05.328 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:05.328 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:05.328 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:05.328 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:05.328 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:05.328 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:05.328 ++ SUPPORT_END=2024-11-12 00:03:05.328 ++ VARIANT='Cloud Edition' 00:03:05.328 ++ VARIANT_ID=cloud 00:03:05.328 + uname -a 00:03:05.328 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:05.328 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:05.586 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:05.844 Hugepages 00:03:05.844 node hugesize free / total 00:03:05.844 node0 1048576kB 0 / 0 00:03:05.844 node0 2048kB 0 / 0 00:03:05.844 00:03:05.844 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:05.844 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:05.844 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:05.844 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:06.101 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:03:06.101 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:03:06.101 + rm -f /tmp/spdk-ld-path 00:03:06.101 + source autorun-spdk.conf 00:03:06.101 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:06.101 ++ SPDK_TEST_NVME=1 00:03:06.101 ++ SPDK_TEST_FTL=1 00:03:06.101 ++ SPDK_TEST_ISAL=1 00:03:06.101 ++ SPDK_RUN_ASAN=1 00:03:06.101 ++ SPDK_RUN_UBSAN=1 00:03:06.101 ++ SPDK_TEST_XNVME=1 00:03:06.101 ++ SPDK_TEST_NVME_FDP=1 00:03:06.101 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:06.101 ++ RUN_NIGHTLY=0 00:03:06.101 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:06.101 + [[ -n '' ]] 00:03:06.101 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:03:06.101 + for M in /var/spdk/build-*-manifest.txt 00:03:06.101 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:06.101 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:06.101 + for M in /var/spdk/build-*-manifest.txt 00:03:06.101 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:06.101 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:06.101 + for M in /var/spdk/build-*-manifest.txt 00:03:06.101 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:03:06.101 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:03:06.101 ++ uname 00:03:06.101 + [[ Linux == \L\i\n\u\x ]] 00:03:06.101 + sudo dmesg -T 00:03:06.101 + sudo dmesg --clear 00:03:06.101 + dmesg_pid=5299 00:03:06.101 
+ sudo dmesg -Tw 00:03:06.101 + [[ Fedora Linux == FreeBSD ]] 00:03:06.101 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:06.101 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:06.101 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:06.101 + [[ -x /usr/src/fio-static/fio ]] 00:03:06.101 + export FIO_BIN=/usr/src/fio-static/fio 00:03:06.101 + FIO_BIN=/usr/src/fio-static/fio 00:03:06.101 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:06.101 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:06.101 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:06.101 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:06.101 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:06.101 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:06.101 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:06.101 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:06.101 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:06.101 18:04:40 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:06.101 18:04:40 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:06.101 18:04:40 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:06.101 18:04:40 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:03:06.101 18:04:40 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:03:06.101 18:04:40 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:03:06.101 18:04:40 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:03:06.101 18:04:40 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:03:06.101 18:04:40 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:03:06.101 18:04:40 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:03:06.101 18:04:40 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:03:06.101 18:04:40 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:03:06.101 18:04:40 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:06.101 18:04:40 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:06.359 18:04:40 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:06.359 18:04:40 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:06.359 18:04:40 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:06.359 18:04:40 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:06.359 18:04:40 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:06.359 18:04:40 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:06.359 18:04:40 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:06.359 18:04:40 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:06.359 18:04:40 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:06.359 18:04:40 -- paths/export.sh@5 -- $ export PATH 00:03:06.359 18:04:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:06.359 18:04:40 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:03:06.359 18:04:40 -- common/autobuild_common.sh@493 -- $ date +%s 00:03:06.359 18:04:40 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732644280.XXXXXX 00:03:06.359 18:04:40 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732644280.2bYKhH 00:03:06.359 18:04:40 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:03:06.359 18:04:40 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:03:06.359 18:04:40 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:03:06.359 18:04:40 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:03:06.359 18:04:40 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:03:06.359 18:04:40 -- common/autobuild_common.sh@509 -- $ get_config_params 00:03:06.359 18:04:40 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:06.359 18:04:40 -- common/autotest_common.sh@10 -- $ set +x 00:03:06.359 18:04:40 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:03:06.359 18:04:40 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:03:06.359 18:04:40 -- pm/common@17 -- $ local monitor 00:03:06.359 18:04:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.360 18:04:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:06.360 18:04:40 -- pm/common@25 -- $ sleep 1 00:03:06.360 18:04:40 -- pm/common@21 -- $ date +%s 00:03:06.360 18:04:40 -- pm/common@21 -- $ date +%s 00:03:06.360 18:04:40 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732644280 00:03:06.360 18:04:40 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732644280 00:03:06.360 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732644280_collect-cpu-load.pm.log 00:03:06.360 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732644280_collect-vmstat.pm.log 00:03:07.293 18:04:41 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:03:07.293 18:04:41 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:07.293 18:04:41 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:07.293 18:04:41 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:07.293 18:04:41 -- spdk/autobuild.sh@16 -- $ date -u 00:03:07.293 Tue Nov 26 06:04:41 PM UTC 2024 00:03:07.293 18:04:41 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:07.293 v25.01-pre-273-g51a65534e 00:03:07.293 18:04:41 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:03:07.293 18:04:41 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:03:07.293 18:04:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:07.293 18:04:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:07.293 18:04:41 -- common/autotest_common.sh@10 -- $ set +x 00:03:07.293 ************************************ 00:03:07.293 START TEST asan 00:03:07.293 ************************************ 00:03:07.293 using asan 00:03:07.293 18:04:41 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:03:07.293 00:03:07.293 real 0m0.000s 00:03:07.293 user 0m0.000s 00:03:07.293 sys 0m0.000s 00:03:07.293 18:04:41 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:07.293 18:04:41 asan -- common/autotest_common.sh@10 -- $ set +x 00:03:07.293 ************************************ 00:03:07.293 END TEST asan 00:03:07.293 ************************************ 00:03:07.293 18:04:41 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:07.293 18:04:41 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:07.293 18:04:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:07.293 18:04:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:07.293 18:04:41 -- common/autotest_common.sh@10 -- $ set +x 00:03:07.293 ************************************ 00:03:07.293 START TEST ubsan 00:03:07.293 ************************************ 00:03:07.293 using ubsan 00:03:07.293 18:04:41 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:07.293 00:03:07.293 real 0m0.000s 00:03:07.293 user 0m0.000s 00:03:07.293 sys 0m0.000s 00:03:07.293 18:04:41 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:07.293 ************************************ 00:03:07.293 END TEST ubsan 00:03:07.293 ************************************ 00:03:07.293 18:04:41 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:07.552 18:04:41 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:07.552 18:04:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:07.552 18:04:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:07.552 18:04:41 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:07.552 18:04:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:07.552 18:04:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:07.552 18:04:41 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:03:07.552 18:04:41 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:07.552 18:04:41 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:03:07.552 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:07.552 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:08.117 Using 'verbs' RDMA provider 00:03:21.310 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:36.182 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:36.182 Creating mk/config.mk...done. 00:03:36.182 Creating mk/cc.flags.mk...done. 00:03:36.182 Type 'make' to build. 00:03:36.182 18:05:09 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:36.182 18:05:09 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:36.182 18:05:09 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:36.182 18:05:09 -- common/autotest_common.sh@10 -- $ set +x 00:03:36.182 ************************************ 00:03:36.182 START TEST make 00:03:36.182 ************************************ 00:03:36.182 18:05:09 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:36.182 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:36.183 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:36.183 meson setup builddir \ 00:03:36.183 -Dwith-libaio=enabled \ 00:03:36.183 -Dwith-liburing=enabled \ 00:03:36.183 -Dwith-libvfn=disabled \ 00:03:36.183 -Dwith-spdk=disabled \ 00:03:36.183 -Dexamples=false \ 00:03:36.183 -Dtests=false \ 00:03:36.183 -Dtools=false && \ 00:03:36.183 meson compile -C builddir && \ 00:03:36.183 cd -) 00:03:36.183 make[1]: Nothing to be done for 'all'. 
00:03:37.555 The Meson build system 00:03:37.555 Version: 1.5.0 00:03:37.555 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:37.555 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:37.555 Build type: native build 00:03:37.555 Project name: xnvme 00:03:37.555 Project version: 0.7.5 00:03:37.555 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:37.555 C linker for the host machine: cc ld.bfd 2.40-14 00:03:37.555 Host machine cpu family: x86_64 00:03:37.555 Host machine cpu: x86_64 00:03:37.555 Message: host_machine.system: linux 00:03:37.555 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:37.555 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:37.555 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:37.555 Run-time dependency threads found: YES 00:03:37.555 Has header "setupapi.h" : NO 00:03:37.555 Has header "linux/blkzoned.h" : YES 00:03:37.555 Has header "linux/blkzoned.h" : YES (cached) 00:03:37.555 Has header "libaio.h" : YES 00:03:37.555 Library aio found: YES 00:03:37.555 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:37.555 Run-time dependency liburing found: YES 2.2 00:03:37.555 Dependency libvfn skipped: feature with-libvfn disabled 00:03:37.555 Found CMake: /usr/bin/cmake (3.27.7) 00:03:37.555 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:37.555 Subproject spdk : skipped: feature with-spdk disabled 00:03:37.555 Run-time dependency appleframeworks found: NO (tried framework) 00:03:37.555 Run-time dependency appleframeworks found: NO (tried framework) 00:03:37.555 Library rt found: YES 00:03:37.555 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:37.555 Configuring xnvme_config.h using configuration 00:03:37.555 Configuring xnvme.spec using configuration 00:03:37.555 Run-time dependency bash-completion found: YES 2.11 00:03:37.555 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:37.555 Program cp found: YES (/usr/bin/cp) 00:03:37.555 Build targets in project: 3 00:03:37.555 00:03:37.555 xnvme 0.7.5 00:03:37.555 00:03:37.555 Subprojects 00:03:37.555 spdk : NO Feature 'with-spdk' disabled 00:03:37.555 00:03:37.555 User defined options 00:03:37.555 examples : false 00:03:37.555 tests : false 00:03:37.555 tools : false 00:03:37.555 with-libaio : enabled 00:03:37.555 with-liburing: enabled 00:03:37.555 with-libvfn : disabled 00:03:37.555 with-spdk : disabled 00:03:37.555 00:03:37.555 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:38.122 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:38.122 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:38.122 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:38.122 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:38.122 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:38.122 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:38.122 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:38.122 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:38.122 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:38.380 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:38.380 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 
00:03:38.380 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:38.380 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:38.380 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:38.380 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:38.380 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:38.380 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:38.380 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:38.380 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:38.380 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:38.380 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:38.380 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:38.380 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:38.380 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:38.380 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:38.380 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:38.647 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:38.647 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:38.647 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:38.647 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:38.647 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:38.647 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:38.647 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:38.647 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:38.647 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:38.647 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:38.647 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:38.647 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:38.647 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:38.647 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:38.647 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:38.647 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:38.647 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:38.647 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:38.647 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:38.647 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:38.647 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:38.647 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:38.647 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:38.647 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:38.647 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:38.647 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:38.647 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:38.647 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:38.647 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:38.647 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:38.919 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:38.919 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:38.919 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:38.919 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:38.919 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:38.919 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:38.919 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:38.919 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:38.919 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:38.919 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:38.919 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:38.919 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:38.919 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:38.919 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:38.919 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:39.178 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:39.178 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:39.178 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:39.436 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:39.436 [75/76] Linking static target lib/libxnvme.a 00:03:39.436 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:39.694 INFO: autodetecting backend as ninja 00:03:39.694 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:39.694 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:49.698 The Meson build system 00:03:49.698 Version: 1.5.0 00:03:49.698 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:49.698 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:49.698 Build type: native build 00:03:49.698 Program cat found: YES (/usr/bin/cat) 00:03:49.698 Project name: DPDK 00:03:49.698 Project version: 24.03.0 00:03:49.698 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:49.698 C linker for the host machine: cc ld.bfd 2.40-14 00:03:49.698 Host machine cpu family: x86_64 00:03:49.698 Host machine cpu: x86_64 00:03:49.698 Message: ## Building in Developer Mode ## 00:03:49.698 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:49.698 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:49.698 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:49.698 Program python3 found: YES (/usr/bin/python3) 00:03:49.698 Program cat found: YES (/usr/bin/cat) 00:03:49.698 Compiler for C supports arguments -march=native: YES 00:03:49.698 Checking for size of "void *" : 8 00:03:49.698 Checking for size of "void *" : 8 (cached) 00:03:49.698 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:03:49.698 Library m found: YES 00:03:49.698 Library numa found: YES 00:03:49.698 Has header "numaif.h" : YES 00:03:49.698 Library fdt found: NO 00:03:49.698 Library execinfo found: NO 00:03:49.698 Has header "execinfo.h" : YES 00:03:49.698 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:49.698 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:49.698 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:49.698 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:49.698 Run-time dependency openssl found: YES 3.1.1 00:03:49.698 Run-time dependency libpcap found: YES 1.10.4 00:03:49.698 Has header "pcap.h" with dependency libpcap: YES 00:03:49.698 Compiler for C supports arguments -Wcast-qual: YES 00:03:49.698 Compiler for C supports arguments -Wdeprecated: YES 00:03:49.698 Compiler for C supports arguments -Wformat: YES 00:03:49.698 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:49.698 Compiler for C supports arguments -Wformat-security: NO 00:03:49.698 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:49.698 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:49.698 Compiler for C supports arguments -Wnested-externs: YES 00:03:49.698 Compiler for C supports arguments -Wold-style-definition: YES 00:03:49.698 Compiler for C supports arguments -Wpointer-arith: YES 00:03:49.698 Compiler for C supports arguments -Wsign-compare: YES 00:03:49.698 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:49.698 Compiler for C supports arguments -Wundef: YES 00:03:49.698 Compiler for C supports arguments -Wwrite-strings: YES 00:03:49.698 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:49.698 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:49.698 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:49.698 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:49.698 Program objdump found: YES (/usr/bin/objdump) 00:03:49.698 Compiler for C supports arguments -mavx512f: YES 00:03:49.698 Checking if "AVX512 checking" compiles: YES 00:03:49.698 Fetching value of define "__SSE4_2__" : 1 00:03:49.698 Fetching value of define "__AES__" : 1 00:03:49.698 Fetching value of define "__AVX__" : 1 00:03:49.698 Fetching value of define "__AVX2__" : 1 00:03:49.698 Fetching value of define "__AVX512BW__" : (undefined) 00:03:49.698 Fetching value of define "__AVX512CD__" : (undefined) 00:03:49.698 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:49.698 Fetching value of define "__AVX512F__" : (undefined) 00:03:49.698 Fetching value of define "__AVX512VL__" : (undefined) 00:03:49.698 Fetching value of define "__PCLMUL__" : 1 00:03:49.698 Fetching value of define "__RDRND__" : 1 00:03:49.698 Fetching value of define "__RDSEED__" : 1 00:03:49.698 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:49.698 Fetching value of define "__znver1__" : (undefined) 00:03:49.698 Fetching value of define "__znver2__" : (undefined) 00:03:49.698 Fetching value of define "__znver3__" : (undefined) 00:03:49.698 Fetching value of define "__znver4__" : (undefined) 00:03:49.698 Library asan found: YES 00:03:49.698 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:49.698 Message: lib/log: Defining dependency "log" 00:03:49.698 Message: lib/kvargs: Defining dependency "kvargs" 00:03:49.698 Message: lib/telemetry: Defining dependency "telemetry" 00:03:49.698 Library rt found: YES 00:03:49.698 
Checking for function "getentropy" : NO 00:03:49.698 Message: lib/eal: Defining dependency "eal" 00:03:49.698 Message: lib/ring: Defining dependency "ring" 00:03:49.698 Message: lib/rcu: Defining dependency "rcu" 00:03:49.698 Message: lib/mempool: Defining dependency "mempool" 00:03:49.698 Message: lib/mbuf: Defining dependency "mbuf" 00:03:49.698 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:49.698 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:49.698 Compiler for C supports arguments -mpclmul: YES 00:03:49.698 Compiler for C supports arguments -maes: YES 00:03:49.698 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:49.698 Compiler for C supports arguments -mavx512bw: YES 00:03:49.698 Compiler for C supports arguments -mavx512dq: YES 00:03:49.698 Compiler for C supports arguments -mavx512vl: YES 00:03:49.698 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:49.698 Compiler for C supports arguments -mavx2: YES 00:03:49.698 Compiler for C supports arguments -mavx: YES 00:03:49.698 Message: lib/net: Defining dependency "net" 00:03:49.698 Message: lib/meter: Defining dependency "meter" 00:03:49.698 Message: lib/ethdev: Defining dependency "ethdev" 00:03:49.698 Message: lib/pci: Defining dependency "pci" 00:03:49.698 Message: lib/cmdline: Defining dependency "cmdline" 00:03:49.698 Message: lib/hash: Defining dependency "hash" 00:03:49.698 Message: lib/timer: Defining dependency "timer" 00:03:49.698 Message: lib/compressdev: Defining dependency "compressdev" 00:03:49.698 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:49.698 Message: lib/dmadev: Defining dependency "dmadev" 00:03:49.698 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:49.698 Message: lib/power: Defining dependency "power" 00:03:49.698 Message: lib/reorder: Defining dependency "reorder" 00:03:49.698 Message: lib/security: Defining dependency "security" 00:03:49.698 Has header "linux/userfaultfd.h" : YES 00:03:49.698 Has header "linux/vduse.h" : YES 00:03:49.698 Message: lib/vhost: Defining dependency "vhost" 00:03:49.698 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:49.699 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:49.699 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:49.699 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:49.699 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:49.699 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:49.699 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:49.699 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:49.699 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:49.699 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:49.699 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:49.699 Configuring doxy-api-html.conf using configuration 00:03:49.699 Configuring doxy-api-man.conf using configuration 00:03:49.699 Program mandb found: YES (/usr/bin/mandb) 00:03:49.699 Program sphinx-build found: NO 00:03:49.699 Configuring rte_build_config.h using configuration 00:03:49.699 Message: 00:03:49.699 ================= 00:03:49.699 Applications Enabled 00:03:49.699 ================= 00:03:49.699 00:03:49.699 apps: 00:03:49.699 00:03:49.699 00:03:49.699 Message: 00:03:49.699 ================= 00:03:49.699 Libraries Enabled 00:03:49.699 ================= 
00:03:49.699 00:03:49.699 libs: 00:03:49.699 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:49.699 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:49.699 cryptodev, dmadev, power, reorder, security, vhost, 00:03:49.699 00:03:49.699 Message: 00:03:49.699 =============== 00:03:49.699 Drivers Enabled 00:03:49.699 =============== 00:03:49.699 00:03:49.699 common: 00:03:49.699 00:03:49.699 bus: 00:03:49.699 pci, vdev, 00:03:49.699 mempool: 00:03:49.699 ring, 00:03:49.699 dma: 00:03:49.699 00:03:49.699 net: 00:03:49.699 00:03:49.699 crypto: 00:03:49.699 00:03:49.699 compress: 00:03:49.699 00:03:49.699 vdpa: 00:03:49.699 00:03:49.699 00:03:49.699 Message: 00:03:49.699 ================= 00:03:49.699 Content Skipped 00:03:49.699 ================= 00:03:49.699 00:03:49.699 apps: 00:03:49.699 dumpcap: explicitly disabled via build config 00:03:49.699 graph: explicitly disabled via build config 00:03:49.699 pdump: explicitly disabled via build config 00:03:49.699 proc-info: explicitly disabled via build config 00:03:49.699 test-acl: explicitly disabled via build config 00:03:49.699 test-bbdev: explicitly disabled via build config 00:03:49.699 test-cmdline: explicitly disabled via build config 00:03:49.699 test-compress-perf: explicitly disabled via build config 00:03:49.699 test-crypto-perf: explicitly disabled via build config 00:03:49.699 test-dma-perf: explicitly disabled via build config 00:03:49.699 test-eventdev: explicitly disabled via build config 00:03:49.699 test-fib: explicitly disabled via build config 00:03:49.699 test-flow-perf: explicitly disabled via build config 00:03:49.699 test-gpudev: explicitly disabled via build config 00:03:49.699 test-mldev: explicitly disabled via build config 00:03:49.699 test-pipeline: explicitly disabled via build config 00:03:49.699 test-pmd: explicitly disabled via build config 00:03:49.699 test-regex: explicitly disabled via build config 00:03:49.699 test-sad: explicitly disabled via build config 00:03:49.699 test-security-perf: explicitly disabled via build config 00:03:49.699 00:03:49.699 libs: 00:03:49.699 argparse: explicitly disabled via build config 00:03:49.699 metrics: explicitly disabled via build config 00:03:49.699 acl: explicitly disabled via build config 00:03:49.699 bbdev: explicitly disabled via build config 00:03:49.699 bitratestats: explicitly disabled via build config 00:03:49.699 bpf: explicitly disabled via build config 00:03:49.699 cfgfile: explicitly disabled via build config 00:03:49.699 distributor: explicitly disabled via build config 00:03:49.699 efd: explicitly disabled via build config 00:03:49.699 eventdev: explicitly disabled via build config 00:03:49.699 dispatcher: explicitly disabled via build config 00:03:49.699 gpudev: explicitly disabled via build config 00:03:49.699 gro: explicitly disabled via build config 00:03:49.699 gso: explicitly disabled via build config 00:03:49.699 ip_frag: explicitly disabled via build config 00:03:49.699 jobstats: explicitly disabled via build config 00:03:49.699 latencystats: explicitly disabled via build config 00:03:49.699 lpm: explicitly disabled via build config 00:03:49.699 member: explicitly disabled via build config 00:03:49.699 pcapng: explicitly disabled via build config 00:03:49.699 rawdev: explicitly disabled via build config 00:03:49.699 regexdev: explicitly disabled via build config 00:03:49.699 mldev: explicitly disabled via build config 00:03:49.699 rib: explicitly disabled via build config 00:03:49.699 sched: explicitly disabled via build 
config 00:03:49.699 stack: explicitly disabled via build config 00:03:49.699 ipsec: explicitly disabled via build config 00:03:49.699 pdcp: explicitly disabled via build config 00:03:49.699 fib: explicitly disabled via build config 00:03:49.699 port: explicitly disabled via build config 00:03:49.699 pdump: explicitly disabled via build config 00:03:49.699 table: explicitly disabled via build config 00:03:49.699 pipeline: explicitly disabled via build config 00:03:49.699 graph: explicitly disabled via build config 00:03:49.699 node: explicitly disabled via build config 00:03:49.699 00:03:49.699 drivers: 00:03:49.699 common/cpt: not in enabled drivers build config 00:03:49.699 common/dpaax: not in enabled drivers build config 00:03:49.699 common/iavf: not in enabled drivers build config 00:03:49.699 common/idpf: not in enabled drivers build config 00:03:49.699 common/ionic: not in enabled drivers build config 00:03:49.699 common/mvep: not in enabled drivers build config 00:03:49.699 common/octeontx: not in enabled drivers build config 00:03:49.699 bus/auxiliary: not in enabled drivers build config 00:03:49.699 bus/cdx: not in enabled drivers build config 00:03:49.699 bus/dpaa: not in enabled drivers build config 00:03:49.699 bus/fslmc: not in enabled drivers build config 00:03:49.699 bus/ifpga: not in enabled drivers build config 00:03:49.699 bus/platform: not in enabled drivers build config 00:03:49.699 bus/uacce: not in enabled drivers build config 00:03:49.699 bus/vmbus: not in enabled drivers build config 00:03:49.699 common/cnxk: not in enabled drivers build config 00:03:49.699 common/mlx5: not in enabled drivers build config 00:03:49.699 common/nfp: not in enabled drivers build config 00:03:49.699 common/nitrox: not in enabled drivers build config 00:03:49.699 common/qat: not in enabled drivers build config 00:03:49.699 common/sfc_efx: not in enabled drivers build config 00:03:49.699 mempool/bucket: not in enabled drivers build config 00:03:49.699 mempool/cnxk: not in enabled drivers build config 00:03:49.699 mempool/dpaa: not in enabled drivers build config 00:03:49.699 mempool/dpaa2: not in enabled drivers build config 00:03:49.699 mempool/octeontx: not in enabled drivers build config 00:03:49.699 mempool/stack: not in enabled drivers build config 00:03:49.699 dma/cnxk: not in enabled drivers build config 00:03:49.699 dma/dpaa: not in enabled drivers build config 00:03:49.699 dma/dpaa2: not in enabled drivers build config 00:03:49.699 dma/hisilicon: not in enabled drivers build config 00:03:49.699 dma/idxd: not in enabled drivers build config 00:03:49.699 dma/ioat: not in enabled drivers build config 00:03:49.699 dma/skeleton: not in enabled drivers build config 00:03:49.699 net/af_packet: not in enabled drivers build config 00:03:49.699 net/af_xdp: not in enabled drivers build config 00:03:49.699 net/ark: not in enabled drivers build config 00:03:49.699 net/atlantic: not in enabled drivers build config 00:03:49.699 net/avp: not in enabled drivers build config 00:03:49.699 net/axgbe: not in enabled drivers build config 00:03:49.699 net/bnx2x: not in enabled drivers build config 00:03:49.699 net/bnxt: not in enabled drivers build config 00:03:49.699 net/bonding: not in enabled drivers build config 00:03:49.699 net/cnxk: not in enabled drivers build config 00:03:49.699 net/cpfl: not in enabled drivers build config 00:03:49.699 net/cxgbe: not in enabled drivers build config 00:03:49.699 net/dpaa: not in enabled drivers build config 00:03:49.699 net/dpaa2: not in enabled drivers build 
config 00:03:49.699 net/e1000: not in enabled drivers build config 00:03:49.699 net/ena: not in enabled drivers build config 00:03:49.699 net/enetc: not in enabled drivers build config 00:03:49.699 net/enetfec: not in enabled drivers build config 00:03:49.699 net/enic: not in enabled drivers build config 00:03:49.699 net/failsafe: not in enabled drivers build config 00:03:49.699 net/fm10k: not in enabled drivers build config 00:03:49.699 net/gve: not in enabled drivers build config 00:03:49.699 net/hinic: not in enabled drivers build config 00:03:49.699 net/hns3: not in enabled drivers build config 00:03:49.699 net/i40e: not in enabled drivers build config 00:03:49.699 net/iavf: not in enabled drivers build config 00:03:49.699 net/ice: not in enabled drivers build config 00:03:49.699 net/idpf: not in enabled drivers build config 00:03:49.699 net/igc: not in enabled drivers build config 00:03:49.699 net/ionic: not in enabled drivers build config 00:03:49.699 net/ipn3ke: not in enabled drivers build config 00:03:49.699 net/ixgbe: not in enabled drivers build config 00:03:49.699 net/mana: not in enabled drivers build config 00:03:49.699 net/memif: not in enabled drivers build config 00:03:49.699 net/mlx4: not in enabled drivers build config 00:03:49.699 net/mlx5: not in enabled drivers build config 00:03:49.699 net/mvneta: not in enabled drivers build config 00:03:49.699 net/mvpp2: not in enabled drivers build config 00:03:49.699 net/netvsc: not in enabled drivers build config 00:03:49.699 net/nfb: not in enabled drivers build config 00:03:49.699 net/nfp: not in enabled drivers build config 00:03:49.699 net/ngbe: not in enabled drivers build config 00:03:49.699 net/null: not in enabled drivers build config 00:03:49.699 net/octeontx: not in enabled drivers build config 00:03:49.699 net/octeon_ep: not in enabled drivers build config 00:03:49.699 net/pcap: not in enabled drivers build config 00:03:49.699 net/pfe: not in enabled drivers build config 00:03:49.699 net/qede: not in enabled drivers build config 00:03:49.699 net/ring: not in enabled drivers build config 00:03:49.699 net/sfc: not in enabled drivers build config 00:03:49.699 net/softnic: not in enabled drivers build config 00:03:49.699 net/tap: not in enabled drivers build config 00:03:49.699 net/thunderx: not in enabled drivers build config 00:03:49.699 net/txgbe: not in enabled drivers build config 00:03:49.699 net/vdev_netvsc: not in enabled drivers build config 00:03:49.699 net/vhost: not in enabled drivers build config 00:03:49.699 net/virtio: not in enabled drivers build config 00:03:49.699 net/vmxnet3: not in enabled drivers build config 00:03:49.699 raw/*: missing internal dependency, "rawdev" 00:03:49.699 crypto/armv8: not in enabled drivers build config 00:03:49.699 crypto/bcmfs: not in enabled drivers build config 00:03:49.699 crypto/caam_jr: not in enabled drivers build config 00:03:49.699 crypto/ccp: not in enabled drivers build config 00:03:49.699 crypto/cnxk: not in enabled drivers build config 00:03:49.699 crypto/dpaa_sec: not in enabled drivers build config 00:03:49.699 crypto/dpaa2_sec: not in enabled drivers build config 00:03:49.699 crypto/ipsec_mb: not in enabled drivers build config 00:03:49.699 crypto/mlx5: not in enabled drivers build config 00:03:49.699 crypto/mvsam: not in enabled drivers build config 00:03:49.699 crypto/nitrox: not in enabled drivers build config 00:03:49.699 crypto/null: not in enabled drivers build config 00:03:49.699 crypto/octeontx: not in enabled drivers build config 00:03:49.699 
crypto/openssl: not in enabled drivers build config 00:03:49.699 crypto/scheduler: not in enabled drivers build config 00:03:49.699 crypto/uadk: not in enabled drivers build config 00:03:49.699 crypto/virtio: not in enabled drivers build config 00:03:49.699 compress/isal: not in enabled drivers build config 00:03:49.700 compress/mlx5: not in enabled drivers build config 00:03:49.700 compress/nitrox: not in enabled drivers build config 00:03:49.700 compress/octeontx: not in enabled drivers build config 00:03:49.700 compress/zlib: not in enabled drivers build config 00:03:49.700 regex/*: missing internal dependency, "regexdev" 00:03:49.700 ml/*: missing internal dependency, "mldev" 00:03:49.700 vdpa/ifc: not in enabled drivers build config 00:03:49.700 vdpa/mlx5: not in enabled drivers build config 00:03:49.700 vdpa/nfp: not in enabled drivers build config 00:03:49.700 vdpa/sfc: not in enabled drivers build config 00:03:49.700 event/*: missing internal dependency, "eventdev" 00:03:49.700 baseband/*: missing internal dependency, "bbdev" 00:03:49.700 gpu/*: missing internal dependency, "gpudev" 00:03:49.700 00:03:49.700 00:03:49.700 Build targets in project: 85 00:03:49.700 00:03:49.700 DPDK 24.03.0 00:03:49.700 00:03:49.700 User defined options 00:03:49.700 buildtype : debug 00:03:49.700 default_library : shared 00:03:49.700 libdir : lib 00:03:49.700 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:49.700 b_sanitize : address 00:03:49.700 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:49.700 c_link_args : 00:03:49.700 cpu_instruction_set: native 00:03:49.700 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:49.700 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:49.700 enable_docs : false 00:03:49.700 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:49.700 enable_kmods : false 00:03:49.700 max_lcores : 128 00:03:49.700 tests : false 00:03:49.700 00:03:49.700 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:49.700 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:49.700 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:49.700 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:49.700 [3/268] Linking static target lib/librte_kvargs.a 00:03:49.700 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:49.700 [5/268] Linking static target lib/librte_log.a 00:03:49.700 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:49.957 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.957 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:50.215 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:50.215 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:50.215 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 
00:03:50.215 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:50.215 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:50.215 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:50.473 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:50.473 [16/268] Linking static target lib/librte_telemetry.a 00:03:50.473 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:50.473 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.473 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:50.473 [20/268] Linking target lib/librte_log.so.24.1 00:03:50.731 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:50.990 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:50.990 [23/268] Linking target lib/librte_kvargs.so.24.1 00:03:50.990 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:51.248 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:51.248 [26/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:51.248 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:51.248 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:51.248 [29/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.248 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:51.248 [31/268] Linking target lib/librte_telemetry.so.24.1 00:03:51.248 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:51.248 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:51.506 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:51.506 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:51.764 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:52.022 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:52.022 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:52.022 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:52.022 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:52.280 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:52.280 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:52.280 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:52.280 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:52.539 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:52.539 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:52.539 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:52.539 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:52.798 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:52.798 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 
00:03:52.798 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:53.057 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:53.315 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:53.315 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:53.316 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:53.574 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:53.574 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:53.574 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:53.574 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:53.574 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:53.574 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:53.832 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:53.832 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:54.090 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:54.349 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:54.349 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:54.349 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:54.349 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:54.349 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:54.611 [70/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:54.611 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:54.611 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:54.880 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:54.880 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:54.880 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:54.880 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:54.880 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:54.880 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:55.139 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:55.139 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:55.139 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:55.139 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:55.398 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:55.656 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:55.656 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:55.656 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:55.656 [87/268] Linking static target lib/librte_eal.a 00:03:55.656 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:55.656 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:55.656 [90/268] Linking static target lib/librte_rcu.a 00:03:55.656 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 
00:03:55.913 [92/268] Linking static target lib/librte_mempool.a 00:03:55.913 [93/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:55.913 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:55.913 [95/268] Linking static target lib/librte_ring.a 00:03:56.170 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:56.428 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:56.428 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.428 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:56.428 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:56.428 [101/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:56.687 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:56.945 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:56.945 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:56.945 [105/268] Linking static target lib/librte_mbuf.a 00:03:56.945 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:56.945 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:57.204 [108/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.204 [109/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:57.204 [110/268] Linking static target lib/librte_meter.a 00:03:57.204 [111/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:57.204 [112/268] Linking static target lib/librte_net.a 00:03:57.463 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.721 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:57.721 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:57.721 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:57.721 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:57.721 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:57.979 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:58.238 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:58.820 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:58.820 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:58.820 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:59.079 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:59.079 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:59.079 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:59.079 [127/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:59.079 [128/268] Linking static target lib/librte_pci.a 00:03:59.079 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:59.338 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:59.338 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:59.338 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:59.338 [133/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:59.596 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:59.596 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:59.596 [136/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:59.596 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:59.596 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:59.596 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:59.855 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:59.855 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:59.855 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:59.855 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:59.855 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:59.855 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:00.114 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:00.114 [147/268] Linking static target lib/librte_cmdline.a 00:04:00.114 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:00.372 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:00.372 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:00.935 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:00.935 [152/268] Linking static target lib/librte_timer.a 00:04:00.935 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:00.935 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:00.935 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:01.501 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:01.501 [157/268] Linking static target lib/librte_hash.a 00:04:01.501 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:01.501 [159/268] Linking static target lib/librte_ethdev.a 00:04:01.501 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:01.501 [161/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.501 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:01.757 [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:01.757 [164/268] Linking static target lib/librte_compressdev.a 00:04:01.757 [165/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:04:01.757 [166/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:02.014 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:02.014 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:02.014 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:02.014 [170/268] Linking static target lib/librte_dmadev.a 00:04:02.273 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:02.273 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:02.530 [173/268] Compiling C 
object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:02.789 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:02.789 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:02.789 [176/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:02.789 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:03.047 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:04:03.304 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:03.304 [180/268] Linking static target lib/librte_cryptodev.a 00:04:03.304 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:03.304 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:03.304 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:03.304 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:03.562 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:03.562 [186/268] Linking static target lib/librte_power.a 00:04:03.822 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:03.822 [188/268] Linking static target lib/librte_reorder.a 00:04:04.081 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:04.081 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:04.081 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:04.081 [192/268] Linking static target lib/librte_security.a 00:04:04.340 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:04.600 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:04.600 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:04.858 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.117 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.117 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:05.376 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:05.654 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:05.654 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:05.654 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:05.654 [203/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:05.921 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:05.922 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:06.180 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:06.438 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:06.438 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:06.438 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:06.438 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:06.438 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:06.697 [212/268] Generating 
drivers/rte_bus_vdev.pmd.c with a custom command 00:04:06.697 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:06.697 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:06.697 [215/268] Linking static target drivers/librte_bus_vdev.a 00:04:06.956 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:06.956 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:06.956 [218/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:06.956 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:06.956 [220/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:06.956 [221/268] Linking static target drivers/librte_bus_pci.a 00:04:06.956 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:07.215 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:07.215 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:07.215 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:07.215 [226/268] Linking static target drivers/librte_mempool_ring.a 00:04:07.474 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.411 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:08.411 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:08.411 [230/268] Linking target lib/librte_eal.so.24.1 00:04:08.669 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:08.669 [232/268] Linking target lib/librte_ring.so.24.1 00:04:08.669 [233/268] Linking target lib/librte_meter.so.24.1 00:04:08.669 [234/268] Linking target lib/librte_timer.so.24.1 00:04:08.669 [235/268] Linking target lib/librte_pci.so.24.1 00:04:08.669 [236/268] Linking target lib/librte_dmadev.so.24.1 00:04:08.669 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:08.669 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:08.669 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:08.669 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:08.992 [241/268] Linking target lib/librte_rcu.so.24.1 00:04:08.992 [242/268] Linking target lib/librte_mempool.so.24.1 00:04:08.992 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:08.992 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:08.992 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:08.992 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:08.992 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:08.992 [248/268] Linking target lib/librte_mbuf.so.24.1 00:04:08.992 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:09.252 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:09.252 [251/268] Linking target lib/librte_cryptodev.so.24.1 00:04:09.252 [252/268] Linking target 
lib/librte_compressdev.so.24.1 00:04:09.252 [253/268] Linking target lib/librte_net.so.24.1 00:04:09.252 [254/268] Linking target lib/librte_reorder.so.24.1 00:04:09.511 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:09.511 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:09.511 [257/268] Linking target lib/librte_cmdline.so.24.1 00:04:09.511 [258/268] Linking target lib/librte_security.so.24.1 00:04:09.511 [259/268] Linking target lib/librte_hash.so.24.1 00:04:09.771 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:10.030 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:10.030 [262/268] Linking target lib/librte_ethdev.so.24.1 00:04:10.288 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:10.288 [264/268] Linking target lib/librte_power.so.24.1 00:04:12.818 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:12.818 [266/268] Linking static target lib/librte_vhost.a 00:04:14.192 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:14.192 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:14.192 INFO: autodetecting backend as ninja 00:04:14.192 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:36.115 CC lib/log/log.o 00:04:36.115 CC lib/log/log_flags.o 00:04:36.115 CC lib/log/log_deprecated.o 00:04:36.115 CC lib/ut/ut.o 00:04:36.115 CC lib/ut_mock/mock.o 00:04:36.115 LIB libspdk_ut.a 00:04:36.115 SO libspdk_ut.so.2.0 00:04:36.115 LIB libspdk_log.a 00:04:36.115 LIB libspdk_ut_mock.a 00:04:36.374 SO libspdk_ut_mock.so.6.0 00:04:36.374 SO libspdk_log.so.7.1 00:04:36.374 SYMLINK libspdk_ut.so 00:04:36.374 SYMLINK libspdk_ut_mock.so 00:04:36.374 SYMLINK libspdk_log.so 00:04:36.633 CXX lib/trace_parser/trace.o 00:04:36.633 CC lib/dma/dma.o 00:04:36.633 CC lib/ioat/ioat.o 00:04:36.633 CC lib/util/base64.o 00:04:36.633 CC lib/util/cpuset.o 00:04:36.633 CC lib/util/bit_array.o 00:04:36.633 CC lib/util/crc16.o 00:04:36.633 CC lib/util/crc32.o 00:04:36.633 CC lib/util/crc32c.o 00:04:36.633 CC lib/vfio_user/host/vfio_user_pci.o 00:04:36.633 CC lib/util/crc32_ieee.o 00:04:36.633 CC lib/util/crc64.o 00:04:36.891 CC lib/util/dif.o 00:04:36.891 LIB libspdk_dma.a 00:04:36.891 CC lib/util/fd.o 00:04:36.891 SO libspdk_dma.so.5.0 00:04:36.891 CC lib/util/fd_group.o 00:04:36.891 CC lib/util/file.o 00:04:36.891 SYMLINK libspdk_dma.so 00:04:36.891 CC lib/util/hexlify.o 00:04:36.891 CC lib/util/iov.o 00:04:36.891 LIB libspdk_ioat.a 00:04:36.891 CC lib/util/math.o 00:04:36.891 SO libspdk_ioat.so.7.0 00:04:36.891 CC lib/vfio_user/host/vfio_user.o 00:04:36.891 SYMLINK libspdk_ioat.so 00:04:36.891 CC lib/util/net.o 00:04:37.148 CC lib/util/pipe.o 00:04:37.148 CC lib/util/strerror_tls.o 00:04:37.148 CC lib/util/string.o 00:04:37.148 CC lib/util/uuid.o 00:04:37.148 CC lib/util/xor.o 00:04:37.148 CC lib/util/zipf.o 00:04:37.148 CC lib/util/md5.o 00:04:37.148 LIB libspdk_vfio_user.a 00:04:37.148 SO libspdk_vfio_user.so.5.0 00:04:37.406 SYMLINK libspdk_vfio_user.so 00:04:37.665 LIB libspdk_util.a 00:04:37.665 SO libspdk_util.so.10.1 00:04:37.665 LIB libspdk_trace_parser.a 00:04:37.924 SO libspdk_trace_parser.so.6.0 00:04:37.924 SYMLINK libspdk_util.so 00:04:37.924 SYMLINK libspdk_trace_parser.so 00:04:37.924 CC lib/idxd/idxd.o 
00:04:37.924 CC lib/idxd/idxd_user.o 00:04:37.924 CC lib/idxd/idxd_kernel.o 00:04:37.924 CC lib/rdma_utils/rdma_utils.o 00:04:37.924 CC lib/env_dpdk/env.o 00:04:37.924 CC lib/env_dpdk/memory.o 00:04:38.181 CC lib/conf/conf.o 00:04:38.181 CC lib/env_dpdk/pci.o 00:04:38.181 CC lib/json/json_parse.o 00:04:38.181 CC lib/vmd/vmd.o 00:04:38.181 CC lib/vmd/led.o 00:04:38.440 LIB libspdk_conf.a 00:04:38.440 CC lib/env_dpdk/init.o 00:04:38.440 SO libspdk_conf.so.6.0 00:04:38.440 LIB libspdk_rdma_utils.a 00:04:38.440 SO libspdk_rdma_utils.so.1.0 00:04:38.440 CC lib/json/json_util.o 00:04:38.440 CC lib/json/json_write.o 00:04:38.440 SYMLINK libspdk_conf.so 00:04:38.440 CC lib/env_dpdk/threads.o 00:04:38.440 SYMLINK libspdk_rdma_utils.so 00:04:38.440 CC lib/env_dpdk/pci_ioat.o 00:04:38.440 CC lib/env_dpdk/pci_virtio.o 00:04:38.698 CC lib/env_dpdk/pci_vmd.o 00:04:38.698 CC lib/env_dpdk/pci_idxd.o 00:04:38.698 CC lib/env_dpdk/pci_event.o 00:04:38.698 CC lib/env_dpdk/sigbus_handler.o 00:04:38.698 CC lib/env_dpdk/pci_dpdk.o 00:04:38.698 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:38.698 LIB libspdk_json.a 00:04:38.698 SO libspdk_json.so.6.0 00:04:38.955 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:38.955 LIB libspdk_idxd.a 00:04:38.956 SYMLINK libspdk_json.so 00:04:38.956 LIB libspdk_vmd.a 00:04:38.956 SO libspdk_idxd.so.12.1 00:04:38.956 CC lib/rdma_provider/common.o 00:04:38.956 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:38.956 SO libspdk_vmd.so.6.0 00:04:38.956 SYMLINK libspdk_idxd.so 00:04:38.956 SYMLINK libspdk_vmd.so 00:04:39.214 CC lib/jsonrpc/jsonrpc_server.o 00:04:39.214 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:39.214 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:39.214 CC lib/jsonrpc/jsonrpc_client.o 00:04:39.214 LIB libspdk_rdma_provider.a 00:04:39.471 SO libspdk_rdma_provider.so.7.0 00:04:39.471 SYMLINK libspdk_rdma_provider.so 00:04:39.471 LIB libspdk_jsonrpc.a 00:04:39.471 SO libspdk_jsonrpc.so.6.0 00:04:39.471 SYMLINK libspdk_jsonrpc.so 00:04:39.729 CC lib/rpc/rpc.o 00:04:39.987 LIB libspdk_env_dpdk.a 00:04:39.987 SO libspdk_env_dpdk.so.15.1 00:04:39.987 LIB libspdk_rpc.a 00:04:39.987 SO libspdk_rpc.so.6.0 00:04:40.245 SYMLINK libspdk_rpc.so 00:04:40.245 SYMLINK libspdk_env_dpdk.so 00:04:40.503 CC lib/keyring/keyring.o 00:04:40.503 CC lib/keyring/keyring_rpc.o 00:04:40.503 CC lib/trace/trace.o 00:04:40.503 CC lib/trace/trace_rpc.o 00:04:40.503 CC lib/trace/trace_flags.o 00:04:40.503 CC lib/notify/notify.o 00:04:40.503 CC lib/notify/notify_rpc.o 00:04:40.503 LIB libspdk_notify.a 00:04:40.761 SO libspdk_notify.so.6.0 00:04:40.761 LIB libspdk_keyring.a 00:04:40.761 SYMLINK libspdk_notify.so 00:04:40.761 SO libspdk_keyring.so.2.0 00:04:40.761 LIB libspdk_trace.a 00:04:40.761 SYMLINK libspdk_keyring.so 00:04:40.761 SO libspdk_trace.so.11.0 00:04:40.761 SYMLINK libspdk_trace.so 00:04:41.018 CC lib/thread/thread.o 00:04:41.018 CC lib/thread/iobuf.o 00:04:41.018 CC lib/sock/sock.o 00:04:41.018 CC lib/sock/sock_rpc.o 00:04:41.582 LIB libspdk_sock.a 00:04:41.839 SO libspdk_sock.so.10.0 00:04:41.839 SYMLINK libspdk_sock.so 00:04:42.097 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:42.097 CC lib/nvme/nvme_ctrlr.o 00:04:42.097 CC lib/nvme/nvme_fabric.o 00:04:42.097 CC lib/nvme/nvme_ns_cmd.o 00:04:42.097 CC lib/nvme/nvme_ns.o 00:04:42.097 CC lib/nvme/nvme_pcie.o 00:04:42.097 CC lib/nvme/nvme_pcie_common.o 00:04:42.097 CC lib/nvme/nvme_qpair.o 00:04:42.097 CC lib/nvme/nvme.o 00:04:43.035 CC lib/nvme/nvme_quirks.o 00:04:43.035 CC lib/nvme/nvme_transport.o 00:04:43.035 CC lib/nvme/nvme_discovery.o 00:04:43.035 CC 
lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:43.035 LIB libspdk_thread.a 00:04:43.035 SO libspdk_thread.so.11.0 00:04:43.293 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:43.293 CC lib/nvme/nvme_tcp.o 00:04:43.293 CC lib/nvme/nvme_opal.o 00:04:43.293 SYMLINK libspdk_thread.so 00:04:43.293 CC lib/accel/accel.o 00:04:43.551 CC lib/nvme/nvme_io_msg.o 00:04:43.551 CC lib/nvme/nvme_poll_group.o 00:04:43.809 CC lib/nvme/nvme_zns.o 00:04:43.810 CC lib/nvme/nvme_stubs.o 00:04:43.810 CC lib/nvme/nvme_auth.o 00:04:43.810 CC lib/nvme/nvme_cuse.o 00:04:43.810 CC lib/nvme/nvme_rdma.o 00:04:44.067 CC lib/accel/accel_rpc.o 00:04:44.325 CC lib/accel/accel_sw.o 00:04:44.325 CC lib/blob/blobstore.o 00:04:44.325 CC lib/blob/request.o 00:04:44.584 CC lib/init/json_config.o 00:04:44.842 LIB libspdk_accel.a 00:04:44.842 CC lib/blob/zeroes.o 00:04:44.842 CC lib/init/subsystem.o 00:04:44.842 SO libspdk_accel.so.16.0 00:04:44.842 SYMLINK libspdk_accel.so 00:04:45.101 CC lib/virtio/virtio.o 00:04:45.101 CC lib/init/subsystem_rpc.o 00:04:45.101 CC lib/init/rpc.o 00:04:45.101 CC lib/blob/blob_bs_dev.o 00:04:45.101 CC lib/virtio/virtio_vhost_user.o 00:04:45.101 CC lib/fsdev/fsdev.o 00:04:45.101 CC lib/bdev/bdev.o 00:04:45.101 CC lib/bdev/bdev_rpc.o 00:04:45.101 CC lib/fsdev/fsdev_io.o 00:04:45.360 LIB libspdk_init.a 00:04:45.360 SO libspdk_init.so.6.0 00:04:45.360 CC lib/virtio/virtio_vfio_user.o 00:04:45.360 CC lib/virtio/virtio_pci.o 00:04:45.360 SYMLINK libspdk_init.so 00:04:45.360 CC lib/bdev/bdev_zone.o 00:04:45.618 CC lib/bdev/part.o 00:04:45.618 CC lib/bdev/scsi_nvme.o 00:04:45.618 CC lib/fsdev/fsdev_rpc.o 00:04:45.618 LIB libspdk_nvme.a 00:04:45.877 LIB libspdk_virtio.a 00:04:45.877 CC lib/event/app.o 00:04:45.877 CC lib/event/reactor.o 00:04:45.877 SO libspdk_virtio.so.7.0 00:04:45.877 CC lib/event/log_rpc.o 00:04:45.877 SO libspdk_nvme.so.15.0 00:04:45.877 CC lib/event/app_rpc.o 00:04:45.877 CC lib/event/scheduler_static.o 00:04:45.877 SYMLINK libspdk_virtio.so 00:04:45.877 LIB libspdk_fsdev.a 00:04:46.137 SO libspdk_fsdev.so.2.0 00:04:46.137 SYMLINK libspdk_fsdev.so 00:04:46.137 SYMLINK libspdk_nvme.so 00:04:46.395 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:46.395 LIB libspdk_event.a 00:04:46.395 SO libspdk_event.so.14.0 00:04:46.654 SYMLINK libspdk_event.so 00:04:47.221 LIB libspdk_fuse_dispatcher.a 00:04:47.221 SO libspdk_fuse_dispatcher.so.1.0 00:04:47.221 SYMLINK libspdk_fuse_dispatcher.so 00:04:49.122 LIB libspdk_blob.a 00:04:49.122 SO libspdk_blob.so.12.0 00:04:49.122 LIB libspdk_bdev.a 00:04:49.122 SYMLINK libspdk_blob.so 00:04:49.122 SO libspdk_bdev.so.17.0 00:04:49.122 SYMLINK libspdk_bdev.so 00:04:49.122 CC lib/lvol/lvol.o 00:04:49.122 CC lib/blobfs/tree.o 00:04:49.122 CC lib/blobfs/blobfs.o 00:04:49.379 CC lib/nbd/nbd_rpc.o 00:04:49.379 CC lib/nbd/nbd.o 00:04:49.379 CC lib/ublk/ublk.o 00:04:49.379 CC lib/scsi/lun.o 00:04:49.379 CC lib/scsi/dev.o 00:04:49.379 CC lib/nvmf/ctrlr.o 00:04:49.379 CC lib/ftl/ftl_core.o 00:04:49.379 CC lib/ftl/ftl_init.o 00:04:49.638 CC lib/ftl/ftl_layout.o 00:04:49.638 CC lib/ftl/ftl_debug.o 00:04:49.638 CC lib/ftl/ftl_io.o 00:04:49.638 CC lib/scsi/port.o 00:04:49.896 CC lib/ublk/ublk_rpc.o 00:04:49.896 CC lib/scsi/scsi.o 00:04:49.896 CC lib/ftl/ftl_sb.o 00:04:49.896 CC lib/nvmf/ctrlr_discovery.o 00:04:49.896 LIB libspdk_nbd.a 00:04:49.896 CC lib/nvmf/ctrlr_bdev.o 00:04:50.153 CC lib/nvmf/subsystem.o 00:04:50.153 SO libspdk_nbd.so.7.0 00:04:50.153 CC lib/scsi/scsi_bdev.o 00:04:50.153 SYMLINK libspdk_nbd.so 00:04:50.153 CC lib/scsi/scsi_pr.o 00:04:50.153 CC lib/ftl/ftl_l2p.o 
00:04:50.153 LIB libspdk_ublk.a 00:04:50.153 SO libspdk_ublk.so.3.0 00:04:50.413 SYMLINK libspdk_ublk.so 00:04:50.413 CC lib/ftl/ftl_l2p_flat.o 00:04:50.413 CC lib/scsi/scsi_rpc.o 00:04:50.413 LIB libspdk_blobfs.a 00:04:50.413 SO libspdk_blobfs.so.11.0 00:04:50.413 LIB libspdk_lvol.a 00:04:50.413 SO libspdk_lvol.so.11.0 00:04:50.413 SYMLINK libspdk_blobfs.so 00:04:50.413 CC lib/scsi/task.o 00:04:50.678 CC lib/ftl/ftl_nv_cache.o 00:04:50.678 CC lib/ftl/ftl_band.o 00:04:50.678 CC lib/ftl/ftl_band_ops.o 00:04:50.678 SYMLINK libspdk_lvol.so 00:04:50.678 CC lib/ftl/ftl_writer.o 00:04:50.678 CC lib/nvmf/nvmf.o 00:04:50.678 CC lib/nvmf/nvmf_rpc.o 00:04:50.678 LIB libspdk_scsi.a 00:04:50.936 SO libspdk_scsi.so.9.0 00:04:50.936 CC lib/nvmf/transport.o 00:04:50.936 CC lib/nvmf/tcp.o 00:04:50.936 SYMLINK libspdk_scsi.so 00:04:50.936 CC lib/nvmf/stubs.o 00:04:50.936 CC lib/nvmf/mdns_server.o 00:04:50.936 CC lib/nvmf/rdma.o 00:04:51.502 CC lib/nvmf/auth.o 00:04:51.759 CC lib/iscsi/conn.o 00:04:51.759 CC lib/iscsi/init_grp.o 00:04:51.759 CC lib/vhost/vhost.o 00:04:51.759 CC lib/vhost/vhost_rpc.o 00:04:51.759 CC lib/ftl/ftl_rq.o 00:04:51.759 CC lib/ftl/ftl_reloc.o 00:04:52.018 CC lib/ftl/ftl_l2p_cache.o 00:04:52.018 CC lib/ftl/ftl_p2l.o 00:04:52.275 CC lib/iscsi/iscsi.o 00:04:52.276 CC lib/ftl/ftl_p2l_log.o 00:04:52.533 CC lib/iscsi/param.o 00:04:52.533 CC lib/iscsi/portal_grp.o 00:04:52.533 CC lib/iscsi/tgt_node.o 00:04:52.533 CC lib/vhost/vhost_scsi.o 00:04:52.791 CC lib/ftl/mngt/ftl_mngt.o 00:04:52.791 CC lib/iscsi/iscsi_subsystem.o 00:04:52.791 CC lib/iscsi/iscsi_rpc.o 00:04:53.050 CC lib/vhost/vhost_blk.o 00:04:53.050 CC lib/vhost/rte_vhost_user.o 00:04:53.050 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:53.050 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:53.309 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:53.309 CC lib/iscsi/task.o 00:04:53.309 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:53.309 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:53.309 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:53.568 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:53.568 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:53.568 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:53.568 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:53.568 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:53.568 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:53.827 CC lib/ftl/utils/ftl_conf.o 00:04:53.827 CC lib/ftl/utils/ftl_md.o 00:04:53.827 CC lib/ftl/utils/ftl_mempool.o 00:04:53.827 CC lib/ftl/utils/ftl_bitmap.o 00:04:53.827 CC lib/ftl/utils/ftl_property.o 00:04:54.085 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:54.085 LIB libspdk_nvmf.a 00:04:54.085 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:54.085 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:54.085 LIB libspdk_iscsi.a 00:04:54.343 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:54.343 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:54.343 LIB libspdk_vhost.a 00:04:54.343 SO libspdk_iscsi.so.8.0 00:04:54.343 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:54.343 SO libspdk_nvmf.so.20.0 00:04:54.343 SO libspdk_vhost.so.8.0 00:04:54.343 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:54.343 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:54.343 SYMLINK libspdk_iscsi.so 00:04:54.343 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:54.343 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:54.343 SYMLINK libspdk_vhost.so 00:04:54.343 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:54.601 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:54.601 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:54.601 CC lib/ftl/base/ftl_base_dev.o 00:04:54.601 CC lib/ftl/base/ftl_base_bdev.o 00:04:54.601 SYMLINK libspdk_nvmf.so 00:04:54.601 CC lib/ftl/ftl_trace.o 
00:04:54.860 LIB libspdk_ftl.a 00:04:55.118 SO libspdk_ftl.so.9.0 00:04:55.684 SYMLINK libspdk_ftl.so 00:04:55.943 CC module/env_dpdk/env_dpdk_rpc.o 00:04:55.943 CC module/fsdev/aio/fsdev_aio.o 00:04:55.943 CC module/blob/bdev/blob_bdev.o 00:04:55.943 CC module/accel/error/accel_error.o 00:04:55.943 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:55.943 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:55.943 CC module/keyring/file/keyring.o 00:04:55.943 CC module/keyring/linux/keyring.o 00:04:55.943 CC module/scheduler/gscheduler/gscheduler.o 00:04:55.943 CC module/sock/posix/posix.o 00:04:55.943 LIB libspdk_env_dpdk_rpc.a 00:04:55.943 SO libspdk_env_dpdk_rpc.so.6.0 00:04:56.201 SYMLINK libspdk_env_dpdk_rpc.so 00:04:56.201 CC module/accel/error/accel_error_rpc.o 00:04:56.201 CC module/keyring/linux/keyring_rpc.o 00:04:56.201 CC module/keyring/file/keyring_rpc.o 00:04:56.201 LIB libspdk_scheduler_dpdk_governor.a 00:04:56.201 LIB libspdk_scheduler_gscheduler.a 00:04:56.201 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:56.201 SO libspdk_scheduler_gscheduler.so.4.0 00:04:56.201 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:56.201 LIB libspdk_scheduler_dynamic.a 00:04:56.201 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:56.201 CC module/fsdev/aio/linux_aio_mgr.o 00:04:56.202 SO libspdk_scheduler_dynamic.so.4.0 00:04:56.202 LIB libspdk_keyring_linux.a 00:04:56.202 SYMLINK libspdk_scheduler_gscheduler.so 00:04:56.202 LIB libspdk_accel_error.a 00:04:56.202 LIB libspdk_blob_bdev.a 00:04:56.461 SO libspdk_keyring_linux.so.1.0 00:04:56.461 SO libspdk_blob_bdev.so.12.0 00:04:56.461 LIB libspdk_keyring_file.a 00:04:56.461 SO libspdk_accel_error.so.2.0 00:04:56.461 SYMLINK libspdk_scheduler_dynamic.so 00:04:56.461 SO libspdk_keyring_file.so.2.0 00:04:56.461 SYMLINK libspdk_keyring_linux.so 00:04:56.461 SYMLINK libspdk_blob_bdev.so 00:04:56.461 SYMLINK libspdk_accel_error.so 00:04:56.461 SYMLINK libspdk_keyring_file.so 00:04:56.461 CC module/accel/ioat/accel_ioat.o 00:04:56.720 CC module/accel/dsa/accel_dsa.o 00:04:56.720 CC module/accel/iaa/accel_iaa.o 00:04:56.720 CC module/bdev/error/vbdev_error.o 00:04:56.720 CC module/blobfs/bdev/blobfs_bdev.o 00:04:56.720 CC module/bdev/gpt/gpt.o 00:04:56.720 CC module/bdev/lvol/vbdev_lvol.o 00:04:56.720 CC module/bdev/delay/vbdev_delay.o 00:04:56.720 CC module/accel/ioat/accel_ioat_rpc.o 00:04:56.720 LIB libspdk_fsdev_aio.a 00:04:56.979 CC module/accel/iaa/accel_iaa_rpc.o 00:04:56.979 SO libspdk_fsdev_aio.so.1.0 00:04:56.979 LIB libspdk_accel_ioat.a 00:04:56.979 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:56.979 CC module/bdev/gpt/vbdev_gpt.o 00:04:56.979 LIB libspdk_sock_posix.a 00:04:56.979 SO libspdk_accel_ioat.so.6.0 00:04:56.979 SYMLINK libspdk_fsdev_aio.so 00:04:56.979 CC module/accel/dsa/accel_dsa_rpc.o 00:04:56.979 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:56.979 SO libspdk_sock_posix.so.6.0 00:04:56.979 SYMLINK libspdk_accel_ioat.so 00:04:56.979 CC module/bdev/error/vbdev_error_rpc.o 00:04:56.979 LIB libspdk_accel_iaa.a 00:04:56.979 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:56.979 SO libspdk_accel_iaa.so.3.0 00:04:56.979 SYMLINK libspdk_sock_posix.so 00:04:57.238 LIB libspdk_blobfs_bdev.a 00:04:57.238 SYMLINK libspdk_accel_iaa.so 00:04:57.238 LIB libspdk_accel_dsa.a 00:04:57.238 SO libspdk_blobfs_bdev.so.6.0 00:04:57.238 SO libspdk_accel_dsa.so.5.0 00:04:57.238 LIB libspdk_bdev_error.a 00:04:57.238 SYMLINK libspdk_blobfs_bdev.so 00:04:57.238 SYMLINK libspdk_accel_dsa.so 00:04:57.238 LIB libspdk_bdev_delay.a 00:04:57.238 SO 
libspdk_bdev_error.so.6.0 00:04:57.238 LIB libspdk_bdev_gpt.a 00:04:57.238 CC module/bdev/malloc/bdev_malloc.o 00:04:57.238 SO libspdk_bdev_delay.so.6.0 00:04:57.238 SO libspdk_bdev_gpt.so.6.0 00:04:57.238 CC module/bdev/null/bdev_null.o 00:04:57.238 SYMLINK libspdk_bdev_error.so 00:04:57.238 SYMLINK libspdk_bdev_delay.so 00:04:57.238 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:57.238 CC module/bdev/null/bdev_null_rpc.o 00:04:57.496 SYMLINK libspdk_bdev_gpt.so 00:04:57.496 CC module/bdev/nvme/bdev_nvme.o 00:04:57.496 CC module/bdev/passthru/vbdev_passthru.o 00:04:57.496 LIB libspdk_bdev_lvol.a 00:04:57.496 CC module/bdev/raid/bdev_raid.o 00:04:57.496 SO libspdk_bdev_lvol.so.6.0 00:04:57.496 CC module/bdev/raid/bdev_raid_rpc.o 00:04:57.496 CC module/bdev/raid/bdev_raid_sb.o 00:04:57.496 SYMLINK libspdk_bdev_lvol.so 00:04:57.496 CC module/bdev/raid/raid0.o 00:04:57.496 CC module/bdev/split/vbdev_split.o 00:04:57.496 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:57.755 LIB libspdk_bdev_null.a 00:04:57.755 SO libspdk_bdev_null.so.6.0 00:04:57.755 LIB libspdk_bdev_malloc.a 00:04:57.755 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:57.755 SO libspdk_bdev_malloc.so.6.0 00:04:57.755 SYMLINK libspdk_bdev_null.so 00:04:57.755 CC module/bdev/split/vbdev_split_rpc.o 00:04:57.755 CC module/bdev/raid/raid1.o 00:04:57.755 SYMLINK libspdk_bdev_malloc.so 00:04:58.025 CC module/bdev/raid/concat.o 00:04:58.025 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:58.025 LIB libspdk_bdev_passthru.a 00:04:58.025 CC module/bdev/xnvme/bdev_xnvme.o 00:04:58.025 SO libspdk_bdev_passthru.so.6.0 00:04:58.025 LIB libspdk_bdev_split.a 00:04:58.025 CC module/bdev/aio/bdev_aio.o 00:04:58.025 SYMLINK libspdk_bdev_passthru.so 00:04:58.025 SO libspdk_bdev_split.so.6.0 00:04:58.025 CC module/bdev/aio/bdev_aio_rpc.o 00:04:58.025 LIB libspdk_bdev_zone_block.a 00:04:58.025 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:58.288 SO libspdk_bdev_zone_block.so.6.0 00:04:58.288 SYMLINK libspdk_bdev_split.so 00:04:58.288 CC module/bdev/nvme/nvme_rpc.o 00:04:58.288 CC module/bdev/nvme/bdev_mdns_client.o 00:04:58.288 SYMLINK libspdk_bdev_zone_block.so 00:04:58.288 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:58.288 CC module/bdev/ftl/bdev_ftl.o 00:04:58.288 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:58.288 CC module/bdev/nvme/vbdev_opal.o 00:04:58.547 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:58.547 LIB libspdk_bdev_xnvme.a 00:04:58.547 SO libspdk_bdev_xnvme.so.3.0 00:04:58.547 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:58.547 LIB libspdk_bdev_aio.a 00:04:58.547 SO libspdk_bdev_aio.so.6.0 00:04:58.547 SYMLINK libspdk_bdev_xnvme.so 00:04:58.547 SYMLINK libspdk_bdev_aio.so 00:04:58.547 LIB libspdk_bdev_ftl.a 00:04:58.547 SO libspdk_bdev_ftl.so.6.0 00:04:58.806 CC module/bdev/iscsi/bdev_iscsi.o 00:04:58.806 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:58.806 SYMLINK libspdk_bdev_ftl.so 00:04:58.806 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:58.806 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:58.806 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:58.806 LIB libspdk_bdev_raid.a 00:04:58.806 SO libspdk_bdev_raid.so.6.0 00:04:59.065 SYMLINK libspdk_bdev_raid.so 00:04:59.324 LIB libspdk_bdev_iscsi.a 00:04:59.324 SO libspdk_bdev_iscsi.so.6.0 00:04:59.324 SYMLINK libspdk_bdev_iscsi.so 00:04:59.581 LIB libspdk_bdev_virtio.a 00:04:59.581 SO libspdk_bdev_virtio.so.6.0 00:04:59.838 SYMLINK libspdk_bdev_virtio.so 00:05:01.213 LIB libspdk_bdev_nvme.a 00:05:01.213 SO libspdk_bdev_nvme.so.7.1 00:05:01.213 SYMLINK libspdk_bdev_nvme.so 
00:05:01.778 CC module/event/subsystems/vmd/vmd.o 00:05:01.778 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:01.778 CC module/event/subsystems/sock/sock.o 00:05:01.778 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:01.778 CC module/event/subsystems/iobuf/iobuf.o 00:05:01.778 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:01.778 CC module/event/subsystems/fsdev/fsdev.o 00:05:01.778 CC module/event/subsystems/keyring/keyring.o 00:05:01.778 CC module/event/subsystems/scheduler/scheduler.o 00:05:02.033 LIB libspdk_event_vhost_blk.a 00:05:02.033 LIB libspdk_event_keyring.a 00:05:02.033 LIB libspdk_event_vmd.a 00:05:02.033 LIB libspdk_event_scheduler.a 00:05:02.033 LIB libspdk_event_sock.a 00:05:02.033 SO libspdk_event_vhost_blk.so.3.0 00:05:02.033 SO libspdk_event_keyring.so.1.0 00:05:02.033 SO libspdk_event_vmd.so.6.0 00:05:02.033 SO libspdk_event_scheduler.so.4.0 00:05:02.033 SO libspdk_event_sock.so.5.0 00:05:02.033 SYMLINK libspdk_event_keyring.so 00:05:02.033 SYMLINK libspdk_event_vhost_blk.so 00:05:02.033 SYMLINK libspdk_event_vmd.so 00:05:02.033 LIB libspdk_event_fsdev.a 00:05:02.033 SYMLINK libspdk_event_scheduler.so 00:05:02.033 LIB libspdk_event_iobuf.a 00:05:02.033 SYMLINK libspdk_event_sock.so 00:05:02.033 SO libspdk_event_fsdev.so.1.0 00:05:02.033 SO libspdk_event_iobuf.so.3.0 00:05:02.033 SYMLINK libspdk_event_fsdev.so 00:05:02.033 SYMLINK libspdk_event_iobuf.so 00:05:02.290 CC module/event/subsystems/accel/accel.o 00:05:02.548 LIB libspdk_event_accel.a 00:05:02.548 SO libspdk_event_accel.so.6.0 00:05:02.803 SYMLINK libspdk_event_accel.so 00:05:03.060 CC module/event/subsystems/bdev/bdev.o 00:05:03.318 LIB libspdk_event_bdev.a 00:05:03.318 SO libspdk_event_bdev.so.6.0 00:05:03.318 SYMLINK libspdk_event_bdev.so 00:05:03.575 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:03.575 CC module/event/subsystems/scsi/scsi.o 00:05:03.575 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:03.575 CC module/event/subsystems/nbd/nbd.o 00:05:03.575 CC module/event/subsystems/ublk/ublk.o 00:05:03.831 LIB libspdk_event_ublk.a 00:05:03.831 LIB libspdk_event_nbd.a 00:05:03.831 LIB libspdk_event_scsi.a 00:05:03.831 SO libspdk_event_nbd.so.6.0 00:05:03.831 SO libspdk_event_ublk.so.3.0 00:05:03.831 SO libspdk_event_scsi.so.6.0 00:05:03.831 LIB libspdk_event_nvmf.a 00:05:03.831 SYMLINK libspdk_event_ublk.so 00:05:03.831 SYMLINK libspdk_event_nbd.so 00:05:03.831 SYMLINK libspdk_event_scsi.so 00:05:03.832 SO libspdk_event_nvmf.so.6.0 00:05:03.832 SYMLINK libspdk_event_nvmf.so 00:05:04.088 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:04.088 CC module/event/subsystems/iscsi/iscsi.o 00:05:04.346 LIB libspdk_event_vhost_scsi.a 00:05:04.346 SO libspdk_event_vhost_scsi.so.3.0 00:05:04.346 LIB libspdk_event_iscsi.a 00:05:04.346 SYMLINK libspdk_event_vhost_scsi.so 00:05:04.346 SO libspdk_event_iscsi.so.6.0 00:05:04.346 SYMLINK libspdk_event_iscsi.so 00:05:04.602 SO libspdk.so.6.0 00:05:04.602 SYMLINK libspdk.so 00:05:04.859 CC app/trace_record/trace_record.o 00:05:04.859 CXX app/trace/trace.o 00:05:04.859 CC app/spdk_nvme_perf/perf.o 00:05:04.859 CC app/spdk_lspci/spdk_lspci.o 00:05:04.859 CC app/iscsi_tgt/iscsi_tgt.o 00:05:04.859 CC app/nvmf_tgt/nvmf_main.o 00:05:04.859 CC app/spdk_tgt/spdk_tgt.o 00:05:04.859 CC test/thread/poller_perf/poller_perf.o 00:05:04.859 CC examples/util/zipf/zipf.o 00:05:04.859 CC examples/ioat/perf/perf.o 00:05:05.117 LINK spdk_lspci 00:05:05.117 LINK poller_perf 00:05:05.117 LINK iscsi_tgt 00:05:05.117 LINK nvmf_tgt 00:05:05.117 LINK zipf 00:05:05.374 LINK 
ioat_perf 00:05:05.374 LINK spdk_tgt 00:05:05.374 LINK spdk_trace_record 00:05:05.374 CC app/spdk_nvme_identify/identify.o 00:05:05.374 LINK spdk_trace 00:05:05.630 CC examples/ioat/verify/verify.o 00:05:05.630 CC app/spdk_nvme_discover/discovery_aer.o 00:05:05.630 CC app/spdk_top/spdk_top.o 00:05:05.630 CC test/dma/test_dma/test_dma.o 00:05:05.630 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:05.630 CC app/spdk_dd/spdk_dd.o 00:05:05.888 CC app/fio/nvme/fio_plugin.o 00:05:05.889 LINK spdk_nvme_discover 00:05:05.889 LINK verify 00:05:05.889 CC test/app/bdev_svc/bdev_svc.o 00:05:05.889 LINK interrupt_tgt 00:05:06.146 LINK bdev_svc 00:05:06.146 LINK spdk_nvme_perf 00:05:06.146 LINK spdk_dd 00:05:06.146 CC examples/sock/hello_world/hello_sock.o 00:05:06.146 LINK test_dma 00:05:06.403 CC examples/thread/thread/thread_ex.o 00:05:06.403 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:06.403 LINK spdk_nvme_identify 00:05:06.661 CC examples/vmd/lsvmd/lsvmd.o 00:05:06.661 CC examples/idxd/perf/perf.o 00:05:06.661 CC examples/vmd/led/led.o 00:05:06.661 LINK hello_sock 00:05:06.661 LINK thread 00:05:06.661 CC app/vhost/vhost.o 00:05:06.661 LINK spdk_nvme 00:05:06.661 LINK lsvmd 00:05:06.661 LINK led 00:05:06.918 TEST_HEADER include/spdk/accel.h 00:05:06.918 TEST_HEADER include/spdk/accel_module.h 00:05:06.918 TEST_HEADER include/spdk/assert.h 00:05:06.918 TEST_HEADER include/spdk/barrier.h 00:05:06.918 TEST_HEADER include/spdk/base64.h 00:05:06.918 TEST_HEADER include/spdk/bdev.h 00:05:06.918 TEST_HEADER include/spdk/bdev_module.h 00:05:06.918 TEST_HEADER include/spdk/bdev_zone.h 00:05:06.918 TEST_HEADER include/spdk/bit_array.h 00:05:06.918 TEST_HEADER include/spdk/bit_pool.h 00:05:06.918 LINK spdk_top 00:05:06.918 TEST_HEADER include/spdk/blob_bdev.h 00:05:06.918 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:06.918 TEST_HEADER include/spdk/blobfs.h 00:05:06.918 TEST_HEADER include/spdk/blob.h 00:05:06.918 TEST_HEADER include/spdk/conf.h 00:05:06.918 TEST_HEADER include/spdk/config.h 00:05:06.919 TEST_HEADER include/spdk/cpuset.h 00:05:06.919 TEST_HEADER include/spdk/crc16.h 00:05:06.919 TEST_HEADER include/spdk/crc32.h 00:05:06.919 TEST_HEADER include/spdk/crc64.h 00:05:06.919 TEST_HEADER include/spdk/dif.h 00:05:06.919 TEST_HEADER include/spdk/dma.h 00:05:06.919 TEST_HEADER include/spdk/endian.h 00:05:06.919 LINK vhost 00:05:06.919 TEST_HEADER include/spdk/env_dpdk.h 00:05:06.919 TEST_HEADER include/spdk/env.h 00:05:06.919 TEST_HEADER include/spdk/event.h 00:05:06.919 TEST_HEADER include/spdk/fd_group.h 00:05:06.919 TEST_HEADER include/spdk/fd.h 00:05:06.919 TEST_HEADER include/spdk/file.h 00:05:06.919 TEST_HEADER include/spdk/fsdev.h 00:05:06.919 TEST_HEADER include/spdk/fsdev_module.h 00:05:06.919 TEST_HEADER include/spdk/ftl.h 00:05:06.919 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:06.919 TEST_HEADER include/spdk/gpt_spec.h 00:05:06.919 TEST_HEADER include/spdk/hexlify.h 00:05:06.919 TEST_HEADER include/spdk/histogram_data.h 00:05:06.919 TEST_HEADER include/spdk/idxd.h 00:05:06.919 TEST_HEADER include/spdk/idxd_spec.h 00:05:06.919 TEST_HEADER include/spdk/init.h 00:05:06.919 TEST_HEADER include/spdk/ioat.h 00:05:06.919 CC app/fio/bdev/fio_plugin.o 00:05:06.919 TEST_HEADER include/spdk/ioat_spec.h 00:05:06.919 TEST_HEADER include/spdk/iscsi_spec.h 00:05:06.919 TEST_HEADER include/spdk/json.h 00:05:06.919 TEST_HEADER include/spdk/jsonrpc.h 00:05:06.919 TEST_HEADER include/spdk/keyring.h 00:05:06.919 TEST_HEADER include/spdk/keyring_module.h 00:05:06.919 TEST_HEADER include/spdk/likely.h 
00:05:06.919 TEST_HEADER include/spdk/log.h 00:05:06.919 TEST_HEADER include/spdk/lvol.h 00:05:06.919 TEST_HEADER include/spdk/md5.h 00:05:06.919 TEST_HEADER include/spdk/memory.h 00:05:06.919 TEST_HEADER include/spdk/mmio.h 00:05:06.919 TEST_HEADER include/spdk/nbd.h 00:05:06.919 TEST_HEADER include/spdk/net.h 00:05:06.919 TEST_HEADER include/spdk/notify.h 00:05:06.919 TEST_HEADER include/spdk/nvme.h 00:05:06.919 TEST_HEADER include/spdk/nvme_intel.h 00:05:06.919 LINK nvme_fuzz 00:05:06.919 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:06.919 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:06.919 TEST_HEADER include/spdk/nvme_spec.h 00:05:06.919 TEST_HEADER include/spdk/nvme_zns.h 00:05:06.919 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:06.919 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:06.919 TEST_HEADER include/spdk/nvmf.h 00:05:06.919 TEST_HEADER include/spdk/nvmf_spec.h 00:05:06.919 TEST_HEADER include/spdk/nvmf_transport.h 00:05:06.919 TEST_HEADER include/spdk/opal.h 00:05:06.919 TEST_HEADER include/spdk/opal_spec.h 00:05:06.919 TEST_HEADER include/spdk/pci_ids.h 00:05:06.919 TEST_HEADER include/spdk/pipe.h 00:05:06.919 TEST_HEADER include/spdk/queue.h 00:05:06.919 TEST_HEADER include/spdk/reduce.h 00:05:06.919 TEST_HEADER include/spdk/rpc.h 00:05:06.919 TEST_HEADER include/spdk/scheduler.h 00:05:06.919 TEST_HEADER include/spdk/scsi.h 00:05:06.919 TEST_HEADER include/spdk/scsi_spec.h 00:05:06.919 TEST_HEADER include/spdk/sock.h 00:05:06.919 LINK idxd_perf 00:05:06.919 TEST_HEADER include/spdk/stdinc.h 00:05:06.919 TEST_HEADER include/spdk/string.h 00:05:06.919 TEST_HEADER include/spdk/thread.h 00:05:06.919 TEST_HEADER include/spdk/trace.h 00:05:06.919 TEST_HEADER include/spdk/trace_parser.h 00:05:06.919 TEST_HEADER include/spdk/tree.h 00:05:06.919 TEST_HEADER include/spdk/ublk.h 00:05:06.919 TEST_HEADER include/spdk/util.h 00:05:06.919 TEST_HEADER include/spdk/uuid.h 00:05:06.919 TEST_HEADER include/spdk/version.h 00:05:07.177 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:07.177 CC test/app/histogram_perf/histogram_perf.o 00:05:07.177 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:07.177 TEST_HEADER include/spdk/vhost.h 00:05:07.177 TEST_HEADER include/spdk/vmd.h 00:05:07.177 TEST_HEADER include/spdk/xor.h 00:05:07.177 TEST_HEADER include/spdk/zipf.h 00:05:07.177 CXX test/cpp_headers/accel.o 00:05:07.177 CC test/env/mem_callbacks/mem_callbacks.o 00:05:07.177 CC test/app/jsoncat/jsoncat.o 00:05:07.177 CC test/event/event_perf/event_perf.o 00:05:07.177 CC test/app/stub/stub.o 00:05:07.177 CC test/event/reactor/reactor.o 00:05:07.177 LINK histogram_perf 00:05:07.177 CXX test/cpp_headers/accel_module.o 00:05:07.437 LINK jsoncat 00:05:07.437 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:07.437 LINK event_perf 00:05:07.437 LINK reactor 00:05:07.437 CC examples/nvme/hello_world/hello_world.o 00:05:07.437 LINK stub 00:05:07.437 CXX test/cpp_headers/assert.o 00:05:07.696 CC test/event/reactor_perf/reactor_perf.o 00:05:07.696 LINK spdk_bdev 00:05:07.696 CC test/event/app_repeat/app_repeat.o 00:05:07.696 CXX test/cpp_headers/barrier.o 00:05:07.696 CC test/event/scheduler/scheduler.o 00:05:07.696 LINK hello_world 00:05:07.696 LINK reactor_perf 00:05:07.696 LINK mem_callbacks 00:05:07.955 CC test/nvme/aer/aer.o 00:05:07.955 LINK app_repeat 00:05:07.955 CXX test/cpp_headers/base64.o 00:05:07.955 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:07.955 CC examples/accel/perf/accel_perf.o 00:05:07.955 LINK scheduler 00:05:08.214 CC examples/nvme/reconnect/reconnect.o 00:05:08.214 CXX 
test/cpp_headers/bdev.o 00:05:08.214 CC test/env/vtophys/vtophys.o 00:05:08.214 CXX test/cpp_headers/bdev_module.o 00:05:08.214 LINK aer 00:05:08.214 LINK hello_fsdev 00:05:08.214 CXX test/cpp_headers/bdev_zone.o 00:05:08.214 CC examples/blob/hello_world/hello_blob.o 00:05:08.214 LINK vtophys 00:05:08.473 CC test/nvme/reset/reset.o 00:05:08.473 CC examples/blob/cli/blobcli.o 00:05:08.473 CXX test/cpp_headers/bit_array.o 00:05:08.473 CC test/rpc_client/rpc_client_test.o 00:05:08.473 LINK reconnect 00:05:08.473 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:08.473 LINK hello_blob 00:05:08.732 LINK accel_perf 00:05:08.732 CXX test/cpp_headers/bit_pool.o 00:05:08.732 CC test/accel/dif/dif.o 00:05:08.732 LINK env_dpdk_post_init 00:05:08.732 LINK rpc_client_test 00:05:08.732 LINK reset 00:05:08.732 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:08.989 CXX test/cpp_headers/blob_bdev.o 00:05:08.989 CXX test/cpp_headers/blobfs_bdev.o 00:05:08.989 CC examples/nvme/arbitration/arbitration.o 00:05:08.989 CXX test/cpp_headers/blobfs.o 00:05:08.989 CC test/env/memory/memory_ut.o 00:05:08.989 CC test/nvme/sgl/sgl.o 00:05:09.248 LINK blobcli 00:05:09.248 CXX test/cpp_headers/blob.o 00:05:09.248 CC test/nvme/e2edp/nvme_dp.o 00:05:09.248 CC test/nvme/overhead/overhead.o 00:05:09.248 LINK arbitration 00:05:09.248 CXX test/cpp_headers/conf.o 00:05:09.506 CC test/nvme/err_injection/err_injection.o 00:05:09.506 LINK nvme_manage 00:05:09.506 LINK sgl 00:05:09.506 LINK nvme_dp 00:05:09.506 CXX test/cpp_headers/config.o 00:05:09.506 LINK overhead 00:05:09.506 CC test/nvme/startup/startup.o 00:05:09.506 LINK dif 00:05:09.506 CXX test/cpp_headers/cpuset.o 00:05:09.851 LINK err_injection 00:05:09.851 LINK iscsi_fuzz 00:05:09.851 CXX test/cpp_headers/crc16.o 00:05:09.851 CC examples/nvme/hotplug/hotplug.o 00:05:09.851 LINK startup 00:05:09.851 CC test/nvme/reserve/reserve.o 00:05:10.134 CXX test/cpp_headers/crc32.o 00:05:10.134 CC test/blobfs/mkfs/mkfs.o 00:05:10.134 CC examples/bdev/hello_world/hello_bdev.o 00:05:10.134 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:10.134 CC test/lvol/esnap/esnap.o 00:05:10.134 CC test/nvme/simple_copy/simple_copy.o 00:05:10.134 CC test/bdev/bdevio/bdevio.o 00:05:10.134 LINK hotplug 00:05:10.134 CXX test/cpp_headers/crc64.o 00:05:10.134 LINK mkfs 00:05:10.134 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:10.134 LINK reserve 00:05:10.392 LINK hello_bdev 00:05:10.392 CXX test/cpp_headers/dif.o 00:05:10.392 LINK simple_copy 00:05:10.392 CXX test/cpp_headers/dma.o 00:05:10.392 CXX test/cpp_headers/endian.o 00:05:10.392 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:10.650 CXX test/cpp_headers/env_dpdk.o 00:05:10.650 LINK memory_ut 00:05:10.650 CXX test/cpp_headers/env.o 00:05:10.650 LINK bdevio 00:05:10.650 CC test/nvme/connect_stress/connect_stress.o 00:05:10.650 LINK cmb_copy 00:05:10.650 CC examples/bdev/bdevperf/bdevperf.o 00:05:10.650 CC test/env/pci/pci_ut.o 00:05:10.650 LINK vhost_fuzz 00:05:10.908 CXX test/cpp_headers/event.o 00:05:10.908 CC test/nvme/boot_partition/boot_partition.o 00:05:10.908 LINK connect_stress 00:05:10.908 CC examples/nvme/abort/abort.o 00:05:10.908 CXX test/cpp_headers/fd_group.o 00:05:10.908 CXX test/cpp_headers/fd.o 00:05:10.908 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:11.167 LINK boot_partition 00:05:11.167 CXX test/cpp_headers/file.o 00:05:11.167 CXX test/cpp_headers/fsdev.o 00:05:11.167 CXX test/cpp_headers/fsdev_module.o 00:05:11.167 CC test/nvme/compliance/nvme_compliance.o 00:05:11.167 LINK pmr_persistence 00:05:11.167 
LINK pci_ut 00:05:11.425 CXX test/cpp_headers/ftl.o 00:05:11.425 CC test/nvme/fused_ordering/fused_ordering.o 00:05:11.425 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:11.425 CXX test/cpp_headers/fuse_dispatcher.o 00:05:11.425 LINK abort 00:05:11.425 CC test/nvme/fdp/fdp.o 00:05:11.749 CXX test/cpp_headers/gpt_spec.o 00:05:11.749 CXX test/cpp_headers/hexlify.o 00:05:11.749 LINK nvme_compliance 00:05:11.749 CXX test/cpp_headers/histogram_data.o 00:05:11.749 LINK fused_ordering 00:05:11.749 LINK doorbell_aers 00:05:11.749 CC test/nvme/cuse/cuse.o 00:05:11.749 CXX test/cpp_headers/idxd.o 00:05:11.749 CXX test/cpp_headers/idxd_spec.o 00:05:11.749 CXX test/cpp_headers/init.o 00:05:11.749 CXX test/cpp_headers/ioat.o 00:05:11.749 CXX test/cpp_headers/ioat_spec.o 00:05:12.006 LINK bdevperf 00:05:12.006 CXX test/cpp_headers/iscsi_spec.o 00:05:12.006 LINK fdp 00:05:12.006 CXX test/cpp_headers/json.o 00:05:12.006 CXX test/cpp_headers/jsonrpc.o 00:05:12.006 CXX test/cpp_headers/keyring.o 00:05:12.006 CXX test/cpp_headers/keyring_module.o 00:05:12.006 CXX test/cpp_headers/likely.o 00:05:12.006 CXX test/cpp_headers/log.o 00:05:12.264 CXX test/cpp_headers/lvol.o 00:05:12.264 CXX test/cpp_headers/md5.o 00:05:12.264 CXX test/cpp_headers/memory.o 00:05:12.264 CXX test/cpp_headers/mmio.o 00:05:12.264 CXX test/cpp_headers/nbd.o 00:05:12.264 CXX test/cpp_headers/net.o 00:05:12.264 CXX test/cpp_headers/notify.o 00:05:12.264 CXX test/cpp_headers/nvme.o 00:05:12.264 CXX test/cpp_headers/nvme_intel.o 00:05:12.264 CXX test/cpp_headers/nvme_ocssd.o 00:05:12.264 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:12.523 CC examples/nvmf/nvmf/nvmf.o 00:05:12.523 CXX test/cpp_headers/nvme_spec.o 00:05:12.523 CXX test/cpp_headers/nvme_zns.o 00:05:12.523 CXX test/cpp_headers/nvmf_cmd.o 00:05:12.523 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:12.523 CXX test/cpp_headers/nvmf.o 00:05:12.523 CXX test/cpp_headers/nvmf_spec.o 00:05:12.523 CXX test/cpp_headers/nvmf_transport.o 00:05:12.523 CXX test/cpp_headers/opal.o 00:05:12.781 CXX test/cpp_headers/opal_spec.o 00:05:12.781 CXX test/cpp_headers/pci_ids.o 00:05:12.781 CXX test/cpp_headers/pipe.o 00:05:12.781 CXX test/cpp_headers/queue.o 00:05:12.781 LINK nvmf 00:05:12.781 CXX test/cpp_headers/reduce.o 00:05:12.781 CXX test/cpp_headers/rpc.o 00:05:12.781 CXX test/cpp_headers/scheduler.o 00:05:12.781 CXX test/cpp_headers/scsi.o 00:05:12.781 CXX test/cpp_headers/scsi_spec.o 00:05:12.781 CXX test/cpp_headers/sock.o 00:05:13.038 CXX test/cpp_headers/stdinc.o 00:05:13.038 CXX test/cpp_headers/string.o 00:05:13.038 CXX test/cpp_headers/thread.o 00:05:13.038 CXX test/cpp_headers/trace.o 00:05:13.038 CXX test/cpp_headers/trace_parser.o 00:05:13.038 CXX test/cpp_headers/tree.o 00:05:13.038 CXX test/cpp_headers/ublk.o 00:05:13.038 CXX test/cpp_headers/util.o 00:05:13.038 CXX test/cpp_headers/uuid.o 00:05:13.038 CXX test/cpp_headers/version.o 00:05:13.038 CXX test/cpp_headers/vfio_user_pci.o 00:05:13.038 CXX test/cpp_headers/vfio_user_spec.o 00:05:13.296 CXX test/cpp_headers/vhost.o 00:05:13.296 CXX test/cpp_headers/vmd.o 00:05:13.296 CXX test/cpp_headers/xor.o 00:05:13.296 CXX test/cpp_headers/zipf.o 00:05:13.554 LINK cuse 00:05:17.783 LINK esnap 00:05:18.042 00:05:18.042 real 1m43.123s 00:05:18.042 user 9m29.600s 00:05:18.042 sys 1m50.355s 00:05:18.042 18:06:52 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:18.042 ************************************ 00:05:18.042 END TEST make 00:05:18.042 ************************************ 00:05:18.042 18:06:52 make -- 
common/autotest_common.sh@10 -- $ set +x 00:05:18.042 18:06:52 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:18.042 18:06:52 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:18.042 18:06:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:18.042 18:06:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:18.042 18:06:52 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:18.042 18:06:52 -- pm/common@44 -- $ pid=5341 00:05:18.042 18:06:52 -- pm/common@50 -- $ kill -TERM 5341 00:05:18.042 18:06:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:18.042 18:06:52 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:18.042 18:06:52 -- pm/common@44 -- $ pid=5343 00:05:18.042 18:06:52 -- pm/common@50 -- $ kill -TERM 5343 00:05:18.042 18:06:52 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:18.042 18:06:52 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:18.042 18:06:52 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.042 18:06:52 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.042 18:06:52 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.042 18:06:52 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.042 18:06:52 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.042 18:06:52 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.042 18:06:52 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.042 18:06:52 -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.042 18:06:52 -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.042 18:06:52 -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.042 18:06:52 -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.042 18:06:52 -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.042 18:06:52 -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.042 18:06:52 -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.042 18:06:52 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.042 18:06:52 -- scripts/common.sh@344 -- # case "$op" in 00:05:18.042 18:06:52 -- scripts/common.sh@345 -- # : 1 00:05:18.042 18:06:52 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.042 18:06:52 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.042 18:06:52 -- scripts/common.sh@365 -- # decimal 1 00:05:18.042 18:06:52 -- scripts/common.sh@353 -- # local d=1 00:05:18.042 18:06:52 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.042 18:06:52 -- scripts/common.sh@355 -- # echo 1 00:05:18.042 18:06:52 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.042 18:06:52 -- scripts/common.sh@366 -- # decimal 2 00:05:18.042 18:06:52 -- scripts/common.sh@353 -- # local d=2 00:05:18.042 18:06:52 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.042 18:06:52 -- scripts/common.sh@355 -- # echo 2 00:05:18.042 18:06:52 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.042 18:06:52 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.042 18:06:52 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.042 18:06:52 -- scripts/common.sh@368 -- # return 0 00:05:18.042 18:06:52 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.042 18:06:52 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.042 --rc genhtml_branch_coverage=1 00:05:18.042 --rc genhtml_function_coverage=1 00:05:18.042 --rc genhtml_legend=1 00:05:18.042 --rc geninfo_all_blocks=1 00:05:18.042 --rc geninfo_unexecuted_blocks=1 00:05:18.042 00:05:18.042 ' 00:05:18.042 18:06:52 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.042 --rc genhtml_branch_coverage=1 00:05:18.042 --rc genhtml_function_coverage=1 00:05:18.042 --rc genhtml_legend=1 00:05:18.042 --rc geninfo_all_blocks=1 00:05:18.042 --rc geninfo_unexecuted_blocks=1 00:05:18.042 00:05:18.042 ' 00:05:18.042 18:06:52 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.042 --rc genhtml_branch_coverage=1 00:05:18.042 --rc genhtml_function_coverage=1 00:05:18.042 --rc genhtml_legend=1 00:05:18.042 --rc geninfo_all_blocks=1 00:05:18.042 --rc geninfo_unexecuted_blocks=1 00:05:18.042 00:05:18.042 ' 00:05:18.042 18:06:52 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.042 --rc genhtml_branch_coverage=1 00:05:18.042 --rc genhtml_function_coverage=1 00:05:18.042 --rc genhtml_legend=1 00:05:18.042 --rc geninfo_all_blocks=1 00:05:18.042 --rc geninfo_unexecuted_blocks=1 00:05:18.042 00:05:18.042 ' 00:05:18.042 18:06:52 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:18.042 18:06:52 -- nvmf/common.sh@7 -- # uname -s 00:05:18.042 18:06:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.042 18:06:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.042 18:06:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.042 18:06:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.042 18:06:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:18.042 18:06:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.042 18:06:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.042 18:06:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.042 18:06:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.042 18:06:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.042 18:06:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:337a1433-e489-415d-a6d5-4412432ba66c 00:05:18.042 
18:06:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=337a1433-e489-415d-a6d5-4412432ba66c 00:05:18.042 18:06:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.300 18:06:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.300 18:06:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:18.300 18:06:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.300 18:06:52 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:18.300 18:06:52 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:18.300 18:06:52 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.300 18:06:52 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.300 18:06:52 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.300 18:06:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.300 18:06:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.300 18:06:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.300 18:06:52 -- paths/export.sh@5 -- # export PATH 00:05:18.300 18:06:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.300 18:06:52 -- nvmf/common.sh@51 -- # : 0 00:05:18.300 18:06:52 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:18.300 18:06:52 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:18.300 18:06:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.300 18:06:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.300 18:06:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.300 18:06:52 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:18.300 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:18.300 18:06:52 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:18.300 18:06:52 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:18.300 18:06:52 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:18.300 18:06:52 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:18.300 18:06:52 -- spdk/autotest.sh@32 -- # uname -s 00:05:18.300 18:06:52 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:18.300 18:06:52 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:18.300 18:06:52 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:18.300 18:06:52 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:18.300 18:06:52 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:18.300 18:06:52 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:18.300 18:06:52 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:18.300 18:06:52 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:18.300 18:06:52 -- spdk/autotest.sh@48 -- # udevadm_pid=54959 00:05:18.300 18:06:52 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:18.300 18:06:52 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:18.300 18:06:52 -- pm/common@17 -- # local monitor 00:05:18.301 18:06:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:18.301 18:06:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:18.301 18:06:52 -- pm/common@21 -- # date +%s 00:05:18.301 18:06:52 -- pm/common@25 -- # sleep 1 00:05:18.301 18:06:52 -- pm/common@21 -- # date +%s 00:05:18.301 18:06:52 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732644412 00:05:18.301 18:06:52 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732644412 00:05:18.301 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732644412_collect-cpu-load.pm.log 00:05:18.301 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732644412_collect-vmstat.pm.log 00:05:19.256 18:06:53 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:19.256 18:06:53 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:19.256 18:06:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:19.256 18:06:53 -- common/autotest_common.sh@10 -- # set +x 00:05:19.256 18:06:53 -- spdk/autotest.sh@59 -- # create_test_list 00:05:19.256 18:06:53 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:19.256 18:06:53 -- common/autotest_common.sh@10 -- # set +x 00:05:19.256 18:06:53 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:19.256 18:06:53 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:19.256 18:06:53 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:19.256 18:06:53 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:19.256 18:06:53 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:19.256 18:06:53 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:19.256 18:06:53 -- common/autotest_common.sh@1457 -- # uname 00:05:19.256 18:06:53 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:19.256 18:06:53 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:19.256 18:06:53 -- common/autotest_common.sh@1477 -- # uname 00:05:19.256 18:06:53 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:19.256 18:06:53 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:19.256 18:06:53 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:19.514 lcov: LCOV version 1.15 00:05:19.514 18:06:53 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:37.589 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:37.589 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:55.713 18:07:28 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:55.713 18:07:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.713 18:07:28 -- common/autotest_common.sh@10 -- # set +x 00:05:55.713 18:07:28 -- spdk/autotest.sh@78 -- # rm -f 00:05:55.713 18:07:28 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:55.713 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:55.713 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:55.713 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:55.713 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:55.713 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:55.714 18:07:29 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:55.714 18:07:29 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:55.714 18:07:29 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:55.714 18:07:29 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:55.714 18:07:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:55.714 18:07:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:55.714 18:07:29 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:55.714 18:07:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:55.714 18:07:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:55.714 18:07:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:55.714 18:07:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:55.714 18:07:29 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:55.714 18:07:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:55.714 18:07:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:55.714 18:07:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:55.714 18:07:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:05:55.714 18:07:29 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:55.714 18:07:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:55.714 18:07:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:55.714 18:07:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:55.714 18:07:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:05:55.714 18:07:29 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:05:55.714 18:07:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:55.714 18:07:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:55.714 18:07:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:55.714 18:07:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:05:55.714 18:07:29 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:05:55.714 18:07:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:55.714 18:07:29 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:55.714 18:07:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:55.714 18:07:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:05:55.714 18:07:29 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:05:55.714 18:07:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:55.714 18:07:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:55.714 18:07:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:55.714 18:07:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:05:55.714 18:07:29 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:05:55.714 18:07:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:55.714 18:07:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:55.714 18:07:29 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:55.714 18:07:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:55.714 18:07:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:55.714 18:07:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:55.714 18:07:29 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:55.714 18:07:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:55.714 No valid GPT data, bailing 00:05:55.714 18:07:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:55.714 18:07:29 -- scripts/common.sh@394 -- # pt= 00:05:55.714 18:07:29 -- scripts/common.sh@395 -- # return 1 00:05:55.714 18:07:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:55.714 1+0 records in 00:05:55.714 1+0 records out 00:05:55.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130725 s, 80.2 MB/s 00:05:55.714 18:07:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:55.714 18:07:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:55.714 18:07:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:55.714 18:07:29 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:55.714 18:07:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:55.714 No valid GPT data, bailing 00:05:55.714 18:07:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:55.714 18:07:29 -- scripts/common.sh@394 -- # pt= 00:05:55.714 18:07:29 -- scripts/common.sh@395 -- # return 1 00:05:55.714 18:07:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:55.714 1+0 records in 00:05:55.714 1+0 records out 00:05:55.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00515205 s, 204 MB/s 00:05:55.714 18:07:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:55.714 18:07:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:55.714 18:07:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:55.714 18:07:29 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:55.714 18:07:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:55.714 No valid GPT data, bailing 00:05:55.714 18:07:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:55.714 18:07:29 -- scripts/common.sh@394 -- # pt= 00:05:55.714 18:07:29 -- scripts/common.sh@395 -- # return 1 00:05:55.714 18:07:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:55.714 1+0 
records in 00:05:55.714 1+0 records out 00:05:55.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00433689 s, 242 MB/s 00:05:55.714 18:07:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:55.714 18:07:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:55.714 18:07:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:55.714 18:07:29 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:55.714 18:07:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:55.714 No valid GPT data, bailing 00:05:55.714 18:07:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:55.714 18:07:29 -- scripts/common.sh@394 -- # pt= 00:05:55.714 18:07:29 -- scripts/common.sh@395 -- # return 1 00:05:55.714 18:07:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:55.714 1+0 records in 00:05:55.714 1+0 records out 00:05:55.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00457049 s, 229 MB/s 00:05:55.714 18:07:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:55.714 18:07:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:55.714 18:07:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:55.714 18:07:29 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:55.714 18:07:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:55.714 No valid GPT data, bailing 00:05:55.714 18:07:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:55.714 18:07:29 -- scripts/common.sh@394 -- # pt= 00:05:55.714 18:07:29 -- scripts/common.sh@395 -- # return 1 00:05:55.714 18:07:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:55.714 1+0 records in 00:05:55.714 1+0 records out 00:05:55.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00423236 s, 248 MB/s 00:05:55.714 18:07:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:55.714 18:07:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:55.714 18:07:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:55.714 18:07:29 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:55.714 18:07:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:55.714 No valid GPT data, bailing 00:05:55.714 18:07:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:55.714 18:07:29 -- scripts/common.sh@394 -- # pt= 00:05:55.714 18:07:29 -- scripts/common.sh@395 -- # return 1 00:05:55.714 18:07:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:55.714 1+0 records in 00:05:55.714 1+0 records out 00:05:55.714 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495184 s, 212 MB/s 00:05:55.714 18:07:29 -- spdk/autotest.sh@105 -- # sync 00:05:55.714 18:07:30 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:55.714 18:07:30 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:55.714 18:07:30 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:57.643 18:07:32 -- spdk/autotest.sh@111 -- # uname -s 00:05:57.643 18:07:32 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:57.643 18:07:32 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:57.643 18:07:32 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:58.208 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:58.775 
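
The dd runs above are autotest.sh's pre-cleanup pass: every NVMe namespace whose first megabyte carries no recognizable partition table gets scrubbed before the tests start. A minimal bash sketch of that loop, reconstructed from the xtrace output (scripts/common.sh@381-395 and autotest.sh@97-101 in the trace) rather than copied from the scripts; $rootdir stands in for /home/vagrant/spdk_repo/spdk:

    shopt -s extglob                      # needed for the nvme*n!(*p*) glob below

    block_in_use() {
        local block=$1 pt
        # SPDK's helper inspects the disk; here it prints "No valid GPT data, bailing"
        "$rootdir/scripts/spdk-gpt.py" "$block"
        # blkid reports no PTTYPE either -> treat the device as free (return 1)
        pt=$(blkid -s PTTYPE -o value "$block") || true
        [[ -n $pt ]]
    }

    for dev in /dev/nvme*n!(*p*); do      # whole namespaces, skip partitions
        block_in_use "$dev" && continue   # anything with a partition table is left alone
        dd if=/dev/zero of="$dev" bs=1M count=1
    done
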
Hugepages
00:05:58.775 node hugesize free / total
00:05:58.775 node0 1048576kB 0 / 0
00:05:58.775 node0 2048kB 0 / 0
00:05:58.775
00:05:58.775 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:58.775 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:05:58.775 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:05:58.775 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:05:59.033 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:05:59.033 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:05:59.033 18:07:33 -- spdk/autotest.sh@117 -- # uname -s
00:05:59.033 18:07:33 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:05:59.033 18:07:33 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:05:59.033 18:07:33 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:59.600 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:00.167 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:00.167 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:06:00.167 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:06:00.167 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:06:00.167 18:07:34 -- common/autotest_common.sh@1517 -- # sleep 1
00:06:01.541 18:07:35 -- common/autotest_common.sh@1518 -- # bdfs=()
00:06:01.541 18:07:35 -- common/autotest_common.sh@1518 -- # local bdfs
00:06:01.541 18:07:35 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:06:01.541 18:07:35 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:06:01.541 18:07:35 -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:01.541 18:07:35 -- common/autotest_common.sh@1498 -- # local bdfs
00:06:01.541 18:07:35 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:01.541 18:07:35 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:01.541 18:07:35 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:01.541 18:07:35 -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:06:01.541 18:07:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:06:01.541 18:07:35 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:06:01.800 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:01.800 Waiting for block devices as requested
00:06:02.058 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:06:02.058 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:06:02.058 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:06:02.058 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:06:07.329 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:06:07.329 18:07:41 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:06:07.329 18:07:41 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:06:07.329 18:07:41 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:06:07.329 18:07:41 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme
00:06:07.329 18:07:41 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
00:06:07.329 18:07:41 -- common/autotest_common.sh@1488 -- #
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:07.329 18:07:41 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:07.329 18:07:41 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:06:07.329 18:07:41 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:07.329 18:07:41 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:07.329 18:07:41 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:07.329 18:07:41 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:07.329 18:07:41 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:07.329 18:07:41 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:07.329 18:07:41 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:07.329 18:07:41 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:07.329 18:07:41 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:07.329 18:07:41 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:07.329 18:07:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:07.329 18:07:41 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:07.329 18:07:41 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:07.329 18:07:41 -- common/autotest_common.sh@1543 -- # continue 00:06:07.329 18:07:41 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:07.329 18:07:41 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:07.329 18:07:41 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:07.329 18:07:41 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:06:07.329 18:07:41 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:07.329 18:07:41 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:07.329 18:07:41 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:07.329 18:07:41 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:07.329 18:07:41 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:07.329 18:07:41 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:07.329 18:07:41 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:07.329 18:07:41 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:07.329 18:07:41 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:07.329 18:07:41 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:07.329 18:07:41 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:07.329 18:07:41 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:07.329 18:07:41 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:07.329 18:07:41 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:07.329 18:07:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:07.329 18:07:41 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:07.329 18:07:41 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:07.329 18:07:41 -- common/autotest_common.sh@1543 -- # continue 00:06:07.329 18:07:41 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:07.329 18:07:41 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:06:07.329 18:07:41 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:07.329 18:07:41 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:06:07.329 18:07:41 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:07.329 18:07:41 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:06:07.329 18:07:41 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:07.329 18:07:41 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:06:07.329 18:07:41 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:06:07.329 18:07:41 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:06:07.329 18:07:41 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:06:07.329 18:07:41 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:07.329 18:07:41 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:07.329 18:07:41 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:07.329 18:07:41 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:07.329 18:07:41 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:07.329 18:07:41 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:06:07.329 18:07:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:07.329 18:07:41 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:07.329 18:07:41 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:07.329 18:07:41 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:07.329 18:07:41 -- common/autotest_common.sh@1543 -- # continue 00:06:07.329 18:07:41 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:07.329 18:07:41 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:06:07.329 18:07:41 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:07.329 18:07:41 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:06:07.329 18:07:41 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:07.329 18:07:41 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:06:07.329 18:07:41 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:07.329 18:07:41 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:06:07.329 18:07:41 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:06:07.329 18:07:41 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:06:07.329 18:07:41 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:06:07.329 18:07:41 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:07.329 18:07:41 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:07.329 18:07:41 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:07.329 18:07:41 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:07.329 18:07:41 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:07.329 18:07:41 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:06:07.329 18:07:41 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:07.329 18:07:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:07.329 18:07:41 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:07.329 18:07:41 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
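
The four nearly identical blocks around this point are one loop inside nvme_namespace_revert: per controller it resolves the PCI address to a /dev/nvmeX node, checks the OACS namespace-management bit, and moves on when the unallocated capacity is already zero. A sketch of one iteration, pieced together from the xtrace lines (common/autotest_common.sh@1524-1543); the actual revert branch is never reached in this run:

    for bdf in "${bdfs[@]}"; do
        # 0000:00:10.0 -> /sys/devices/.../0000:00:10.0/nvme/nvme1 -> /dev/nvme1
        sysfs_path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
        nvme_ctrlr=/dev/$(basename "$sysfs_path")

        # OACS bit 3 (0x8) = namespace management; the traced 0x12a has it set
        oacs=$(nvme id-ctrl "$nvme_ctrlr" | grep oacs | cut -d: -f2)
        (( oacs & 0x8 )) || continue

        # unvmcap of 0 means no unallocated NVM capacity, so nothing to revert
        unvmcap=$(nvme id-ctrl "$nvme_ctrlr" | grep unvmcap | cut -d: -f2)
        [[ $unvmcap -eq 0 ]] && continue

        # ...the namespace revert itself would happen here
    done
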
00:06:07.329 18:07:41 -- common/autotest_common.sh@1543 -- # continue 00:06:07.329 18:07:41 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:07.329 18:07:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:07.329 18:07:41 -- common/autotest_common.sh@10 -- # set +x 00:06:07.329 18:07:41 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:07.329 18:07:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:07.329 18:07:41 -- common/autotest_common.sh@10 -- # set +x 00:06:07.329 18:07:41 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:07.896 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:08.461 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:08.461 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:08.461 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:08.720 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:08.720 18:07:43 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:08.720 18:07:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:08.720 18:07:43 -- common/autotest_common.sh@10 -- # set +x 00:06:08.720 18:07:43 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:08.720 18:07:43 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:08.720 18:07:43 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:08.720 18:07:43 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:08.720 18:07:43 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:08.720 18:07:43 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:08.720 18:07:43 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:08.720 18:07:43 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:08.720 18:07:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:08.720 18:07:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:08.720 18:07:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:08.720 18:07:43 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:08.720 18:07:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:08.720 18:07:43 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:06:08.720 18:07:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:08.720 18:07:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:08.720 18:07:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:08.720 18:07:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:08.720 18:07:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:08.720 18:07:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:08.720 18:07:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:08.720 18:07:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:08.720 18:07:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:08.720 18:07:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:08.720 18:07:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:06:08.720 18:07:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:08.720 18:07:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
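
opal_revert_cleanup only acts on controllers whose PCI device ID is 0x0a54; the get_nvme_bdfs_by_id scan traced above (and continuing below) can be sketched as follows, with $rootdir again standing in for the SPDK checkout:

    get_nvme_bdfs() {
        "$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'
    }

    get_nvme_bdfs_by_id() {
        local id=$1 bdf device bdfs=()
        for bdf in $(get_nvme_bdfs); do
            device=$(cat "/sys/bus/pci/devices/$bdf/device")
            [[ $device == "$id" ]] && bdfs+=("$bdf")
        done
        ((${#bdfs[@]} == 0)) || printf '%s\n' "${bdfs[@]}"
    }

    # All four emulated controllers report 0x0010 here, so
    # get_nvme_bdfs_by_id 0x0a54 matches nothing and the OPAL revert is skipped.
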
00:06:08.720 18:07:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:08.720 18:07:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:06:08.720 18:07:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:08.720 18:07:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:08.720 18:07:43 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:08.720 18:07:43 -- common/autotest_common.sh@1572 -- # return 0 00:06:08.720 18:07:43 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:08.720 18:07:43 -- common/autotest_common.sh@1580 -- # return 0 00:06:08.720 18:07:43 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:08.720 18:07:43 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:08.720 18:07:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:08.720 18:07:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:08.720 18:07:43 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:08.720 18:07:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:08.720 18:07:43 -- common/autotest_common.sh@10 -- # set +x 00:06:08.979 18:07:43 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:08.979 18:07:43 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:08.979 18:07:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.979 18:07:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.979 18:07:43 -- common/autotest_common.sh@10 -- # set +x 00:06:08.979 ************************************ 00:06:08.979 START TEST env 00:06:08.979 ************************************ 00:06:08.979 18:07:43 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:08.979 * Looking for test storage... 00:06:08.979 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:08.979 18:07:43 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:08.979 18:07:43 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:08.979 18:07:43 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:08.979 18:07:43 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:08.979 18:07:43 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.979 18:07:43 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.979 18:07:43 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.979 18:07:43 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.979 18:07:43 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.979 18:07:43 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.979 18:07:43 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.979 18:07:43 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.979 18:07:43 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.979 18:07:43 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.979 18:07:43 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.979 18:07:43 env -- scripts/common.sh@344 -- # case "$op" in 00:06:08.979 18:07:43 env -- scripts/common.sh@345 -- # : 1 00:06:08.979 18:07:43 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.979 18:07:43 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.979 18:07:43 env -- scripts/common.sh@365 -- # decimal 1 00:06:08.979 18:07:43 env -- scripts/common.sh@353 -- # local d=1 00:06:08.979 18:07:43 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.979 18:07:43 env -- scripts/common.sh@355 -- # echo 1 00:06:08.979 18:07:43 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.979 18:07:43 env -- scripts/common.sh@366 -- # decimal 2 00:06:08.979 18:07:43 env -- scripts/common.sh@353 -- # local d=2 00:06:08.979 18:07:43 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.979 18:07:43 env -- scripts/common.sh@355 -- # echo 2 00:06:08.979 18:07:43 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.979 18:07:43 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.979 18:07:43 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.979 18:07:43 env -- scripts/common.sh@368 -- # return 0 00:06:08.979 18:07:43 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.979 18:07:43 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:08.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.979 --rc genhtml_branch_coverage=1 00:06:08.979 --rc genhtml_function_coverage=1 00:06:08.979 --rc genhtml_legend=1 00:06:08.979 --rc geninfo_all_blocks=1 00:06:08.979 --rc geninfo_unexecuted_blocks=1 00:06:08.979 00:06:08.979 ' 00:06:08.979 18:07:43 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:08.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.979 --rc genhtml_branch_coverage=1 00:06:08.979 --rc genhtml_function_coverage=1 00:06:08.979 --rc genhtml_legend=1 00:06:08.979 --rc geninfo_all_blocks=1 00:06:08.979 --rc geninfo_unexecuted_blocks=1 00:06:08.979 00:06:08.979 ' 00:06:08.979 18:07:43 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:08.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.980 --rc genhtml_branch_coverage=1 00:06:08.980 --rc genhtml_function_coverage=1 00:06:08.980 --rc genhtml_legend=1 00:06:08.980 --rc geninfo_all_blocks=1 00:06:08.980 --rc geninfo_unexecuted_blocks=1 00:06:08.980 00:06:08.980 ' 00:06:08.980 18:07:43 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:08.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.980 --rc genhtml_branch_coverage=1 00:06:08.980 --rc genhtml_function_coverage=1 00:06:08.980 --rc genhtml_legend=1 00:06:08.980 --rc geninfo_all_blocks=1 00:06:08.980 --rc geninfo_unexecuted_blocks=1 00:06:08.980 00:06:08.980 ' 00:06:08.980 18:07:43 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:08.980 18:07:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.980 18:07:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.980 18:07:43 env -- common/autotest_common.sh@10 -- # set +x 00:06:08.980 ************************************ 00:06:08.980 START TEST env_memory 00:06:08.980 ************************************ 00:06:08.980 18:07:43 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:08.980 00:06:08.980 00:06:08.980 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.980 http://cunit.sourceforge.net/ 00:06:08.980 00:06:08.980 00:06:08.980 Suite: memory 00:06:09.239 Test: alloc and free memory map ...[2024-11-26 18:07:43.455532] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:09.239 passed 00:06:09.239 Test: mem map translation ...[2024-11-26 18:07:43.503312] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:09.239 [2024-11-26 18:07:43.503423] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:09.239 [2024-11-26 18:07:43.503503] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:09.239 [2024-11-26 18:07:43.503594] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:09.239 passed 00:06:09.239 Test: mem map registration ...[2024-11-26 18:07:43.581135] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:09.239 [2024-11-26 18:07:43.581264] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:09.239 passed 00:06:09.239 Test: mem map adjacent registrations ...passed 00:06:09.239 00:06:09.239 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.239 suites 1 1 n/a 0 0 00:06:09.239 tests 4 4 4 0 0 00:06:09.239 asserts 152 152 152 0 n/a 00:06:09.239 00:06:09.239 Elapsed time = 0.271 seconds 00:06:09.498 00:06:09.498 real 0m0.309s 00:06:09.498 user 0m0.283s 00:06:09.498 sys 0m0.021s 00:06:09.498 18:07:43 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.498 18:07:43 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:09.498 ************************************ 00:06:09.498 END TEST env_memory 00:06:09.498 ************************************ 00:06:09.498 18:07:43 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:09.498 18:07:43 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.498 18:07:43 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.498 18:07:43 env -- common/autotest_common.sh@10 -- # set +x 00:06:09.498 ************************************ 00:06:09.498 START TEST env_vtophys 00:06:09.498 ************************************ 00:06:09.498 18:07:43 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:09.498 EAL: lib.eal log level changed from notice to debug 00:06:09.498 EAL: Detected lcore 0 as core 0 on socket 0 00:06:09.498 EAL: Detected lcore 1 as core 0 on socket 0 00:06:09.498 EAL: Detected lcore 2 as core 0 on socket 0 00:06:09.498 EAL: Detected lcore 3 as core 0 on socket 0 00:06:09.498 EAL: Detected lcore 4 as core 0 on socket 0 00:06:09.498 EAL: Detected lcore 5 as core 0 on socket 0 00:06:09.498 EAL: Detected lcore 6 as core 0 on socket 0 00:06:09.498 EAL: Detected lcore 7 as core 0 on socket 0 00:06:09.498 EAL: Detected lcore 8 as core 0 on socket 0 00:06:09.498 EAL: Detected lcore 9 as core 0 on socket 0 00:06:09.498 EAL: Maximum logical cores by configuration: 128 00:06:09.498 EAL: Detected CPU lcores: 10 00:06:09.498 EAL: Detected NUMA nodes: 1 00:06:09.498 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:09.498 EAL: Detected shared linkage of DPDK 00:06:09.498 EAL: No 
shared files mode enabled, IPC will be disabled 00:06:09.498 EAL: Selected IOVA mode 'PA' 00:06:09.498 EAL: Probing VFIO support... 00:06:09.498 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:09.498 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:09.498 EAL: Ask a virtual area of 0x2e000 bytes 00:06:09.498 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:09.498 EAL: Setting up physically contiguous memory... 00:06:09.498 EAL: Setting maximum number of open files to 524288 00:06:09.498 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:09.498 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:09.498 EAL: Ask a virtual area of 0x61000 bytes 00:06:09.498 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:09.498 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:09.498 EAL: Ask a virtual area of 0x400000000 bytes 00:06:09.498 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:09.498 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:09.498 EAL: Ask a virtual area of 0x61000 bytes 00:06:09.498 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:09.498 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:09.498 EAL: Ask a virtual area of 0x400000000 bytes 00:06:09.498 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:09.498 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:09.498 EAL: Ask a virtual area of 0x61000 bytes 00:06:09.498 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:09.498 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:09.498 EAL: Ask a virtual area of 0x400000000 bytes 00:06:09.498 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:09.498 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:09.498 EAL: Ask a virtual area of 0x61000 bytes 00:06:09.498 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:09.498 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:09.498 EAL: Ask a virtual area of 0x400000000 bytes 00:06:09.498 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:09.498 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:09.498 EAL: Hugepages will be freed exactly as allocated. 00:06:09.498 EAL: No shared files mode enabled, IPC is disabled 00:06:09.498 EAL: No shared files mode enabled, IPC is disabled 00:06:09.757 EAL: TSC frequency is ~2200000 KHz 00:06:09.757 EAL: Main lcore 0 is ready (tid=7f12864a5a40;cpuset=[0]) 00:06:09.757 EAL: Trying to obtain current memory policy. 00:06:09.757 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:09.757 EAL: Restoring previous memory policy: 0 00:06:09.757 EAL: request: mp_malloc_sync 00:06:09.757 EAL: No shared files mode enabled, IPC is disabled 00:06:09.757 EAL: Heap on socket 0 was expanded by 2MB 00:06:09.757 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:09.757 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:09.757 EAL: Mem event callback 'spdk:(nil)' registered 00:06:09.757 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:06:09.757 00:06:09.757 00:06:09.757 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.757 http://cunit.sourceforge.net/ 00:06:09.757 00:06:09.757 00:06:09.757 Suite: components_suite 00:06:10.324 Test: vtophys_malloc_test ...passed 00:06:10.324 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:10.324 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.324 EAL: Restoring previous memory policy: 4 00:06:10.324 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.324 EAL: request: mp_malloc_sync 00:06:10.324 EAL: No shared files mode enabled, IPC is disabled 00:06:10.324 EAL: Heap on socket 0 was expanded by 4MB 00:06:10.324 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.324 EAL: request: mp_malloc_sync 00:06:10.324 EAL: No shared files mode enabled, IPC is disabled 00:06:10.324 EAL: Heap on socket 0 was shrunk by 4MB 00:06:10.324 EAL: Trying to obtain current memory policy. 00:06:10.324 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.324 EAL: Restoring previous memory policy: 4 00:06:10.324 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.324 EAL: request: mp_malloc_sync 00:06:10.324 EAL: No shared files mode enabled, IPC is disabled 00:06:10.324 EAL: Heap on socket 0 was expanded by 6MB 00:06:10.324 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.324 EAL: request: mp_malloc_sync 00:06:10.324 EAL: No shared files mode enabled, IPC is disabled 00:06:10.324 EAL: Heap on socket 0 was shrunk by 6MB 00:06:10.324 EAL: Trying to obtain current memory policy. 00:06:10.324 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.324 EAL: Restoring previous memory policy: 4 00:06:10.324 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.324 EAL: request: mp_malloc_sync 00:06:10.324 EAL: No shared files mode enabled, IPC is disabled 00:06:10.324 EAL: Heap on socket 0 was expanded by 10MB 00:06:10.324 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.324 EAL: request: mp_malloc_sync 00:06:10.324 EAL: No shared files mode enabled, IPC is disabled 00:06:10.324 EAL: Heap on socket 0 was shrunk by 10MB 00:06:10.324 EAL: Trying to obtain current memory policy. 00:06:10.324 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.324 EAL: Restoring previous memory policy: 4 00:06:10.324 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.324 EAL: request: mp_malloc_sync 00:06:10.324 EAL: No shared files mode enabled, IPC is disabled 00:06:10.324 EAL: Heap on socket 0 was expanded by 18MB 00:06:10.324 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.324 EAL: request: mp_malloc_sync 00:06:10.324 EAL: No shared files mode enabled, IPC is disabled 00:06:10.324 EAL: Heap on socket 0 was shrunk by 18MB 00:06:10.324 EAL: Trying to obtain current memory policy. 00:06:10.324 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.324 EAL: Restoring previous memory policy: 4 00:06:10.324 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.324 EAL: request: mp_malloc_sync 00:06:10.324 EAL: No shared files mode enabled, IPC is disabled 00:06:10.324 EAL: Heap on socket 0 was expanded by 34MB 00:06:10.324 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.324 EAL: request: mp_malloc_sync 00:06:10.324 EAL: No shared files mode enabled, IPC is disabled 00:06:10.324 EAL: Heap on socket 0 was shrunk by 34MB 00:06:10.324 EAL: Trying to obtain current memory policy. 
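The vtophys suite above drives SPDK's DMA-safe allocator and its virtual-to-physical translation path. As a minimal sketch of the same public env API (illustrative only, not the test binary's source; it assumes the env is already initialized, the 4 KiB size and alignment are arbitrary, and the two-argument spdk_vtophys form is the one in recent SPDK releases):

```c
#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

/* Allocate a DMA-safe buffer from the hugepage-backed heap and ask the
 * env layer for its physical address -- roughly what vtophys verifies. */
static int vtophys_demo(void)
{
	uint64_t size = 4096;
	void *buf = spdk_dma_malloc(4096, 0x1000, NULL); /* 4 KiB, 4 KiB aligned */

	if (buf == NULL) {
		return -1;
	}
	uint64_t paddr = spdk_vtophys(buf, &size);
	if (paddr == SPDK_VTOPHYS_ERROR) {
		spdk_dma_free(buf);
		return -1;
	}
	printf("vaddr %p -> paddr 0x%" PRIx64 " (%" PRIu64 " bytes contiguous)\n",
	       buf, paddr, size);
	spdk_dma_free(buf);
	return 0;
}
```

On a working hugepage setup the translation succeeds; the EAL lines above show exactly that heap being assembled from 2 MB pages.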
00:06:10.324 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.324 EAL: Restoring previous memory policy: 4 00:06:10.324 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.324 EAL: request: mp_malloc_sync 00:06:10.324 EAL: No shared files mode enabled, IPC is disabled 00:06:10.324 EAL: Heap on socket 0 was expanded by 66MB 00:06:10.583 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.583 EAL: request: mp_malloc_sync 00:06:10.583 EAL: No shared files mode enabled, IPC is disabled 00:06:10.583 EAL: Heap on socket 0 was shrunk by 66MB 00:06:10.583 EAL: Trying to obtain current memory policy. 00:06:10.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.583 EAL: Restoring previous memory policy: 4 00:06:10.583 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.583 EAL: request: mp_malloc_sync 00:06:10.583 EAL: No shared files mode enabled, IPC is disabled 00:06:10.583 EAL: Heap on socket 0 was expanded by 130MB 00:06:10.841 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.841 EAL: request: mp_malloc_sync 00:06:10.841 EAL: No shared files mode enabled, IPC is disabled 00:06:10.841 EAL: Heap on socket 0 was shrunk by 130MB 00:06:11.100 EAL: Trying to obtain current memory policy. 00:06:11.100 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:11.100 EAL: Restoring previous memory policy: 4 00:06:11.100 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.100 EAL: request: mp_malloc_sync 00:06:11.100 EAL: No shared files mode enabled, IPC is disabled 00:06:11.100 EAL: Heap on socket 0 was expanded by 258MB 00:06:11.667 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.667 EAL: request: mp_malloc_sync 00:06:11.667 EAL: No shared files mode enabled, IPC is disabled 00:06:11.667 EAL: Heap on socket 0 was shrunk by 258MB 00:06:12.234 EAL: Trying to obtain current memory policy. 00:06:12.234 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:12.234 EAL: Restoring previous memory policy: 4 00:06:12.234 EAL: Calling mem event callback 'spdk:(nil)' 00:06:12.234 EAL: request: mp_malloc_sync 00:06:12.234 EAL: No shared files mode enabled, IPC is disabled 00:06:12.234 EAL: Heap on socket 0 was expanded by 514MB 00:06:13.169 EAL: Calling mem event callback 'spdk:(nil)' 00:06:13.169 EAL: request: mp_malloc_sync 00:06:13.169 EAL: No shared files mode enabled, IPC is disabled 00:06:13.169 EAL: Heap on socket 0 was shrunk by 514MB 00:06:14.105 EAL: Trying to obtain current memory policy. 
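Note the allocation pattern in the rounds above: each request is roughly double the previous one (4 MB, 6 MB, 10 MB, 18 MB, 34 MB, up through 514 MB here, with a final 1026 MB round below). Every expansion forces DPDK to map additional hugepages, so the registered 'spdk:' mem event callback fires for both the grow and the matching shrink; the paired 'expanded by'/'shrunk by' lines are the expected behavior, not failures.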
00:06:14.105 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:14.363 EAL: Restoring previous memory policy: 4 00:06:14.363 EAL: Calling mem event callback 'spdk:(nil)' 00:06:14.363 EAL: request: mp_malloc_sync 00:06:14.363 EAL: No shared files mode enabled, IPC is disabled 00:06:14.363 EAL: Heap on socket 0 was expanded by 1026MB 00:06:16.267 EAL: Calling mem event callback 'spdk:(nil)' 00:06:16.267 EAL: request: mp_malloc_sync 00:06:16.267 EAL: No shared files mode enabled, IPC is disabled 00:06:16.267 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:17.655 passed 00:06:17.655 00:06:17.655 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.655 suites 1 1 n/a 0 0 00:06:17.655 tests 2 2 2 0 0 00:06:17.655 asserts 5607 5607 5607 0 n/a 00:06:17.655 00:06:17.655 Elapsed time = 7.809 seconds 00:06:17.655 EAL: Calling mem event callback 'spdk:(nil)' 00:06:17.655 EAL: request: mp_malloc_sync 00:06:17.655 EAL: No shared files mode enabled, IPC is disabled 00:06:17.655 EAL: Heap on socket 0 was shrunk by 2MB 00:06:17.655 EAL: No shared files mode enabled, IPC is disabled 00:06:17.655 EAL: No shared files mode enabled, IPC is disabled 00:06:17.655 EAL: No shared files mode enabled, IPC is disabled 00:06:17.655 00:06:17.655 real 0m8.209s 00:06:17.655 user 0m6.893s 00:06:17.655 sys 0m1.145s 00:06:17.655 18:07:51 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.655 18:07:51 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:17.655 ************************************ 00:06:17.655 END TEST env_vtophys 00:06:17.655 ************************************ 00:06:17.655 18:07:52 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:17.655 18:07:52 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.655 18:07:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.655 18:07:52 env -- common/autotest_common.sh@10 -- # set +x 00:06:17.655 ************************************ 00:06:17.655 START TEST env_pci 00:06:17.655 ************************************ 00:06:17.655 18:07:52 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:17.655 00:06:17.655 00:06:17.655 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.655 http://cunit.sourceforge.net/ 00:06:17.655 00:06:17.655 00:06:17.655 Suite: pci 00:06:17.655 Test: pci_hook ...[2024-11-26 18:07:52.046849] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57817 has claimed it 00:06:17.655 EAL: Cannot find device (10000:00:01.0) 00:06:17.655 EAL: Failed to attach device on primary process 00:06:17.655 passed 00:06:17.655 00:06:17.655 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.655 suites 1 1 n/a 0 0 00:06:17.655 tests 1 1 1 0 0 00:06:17.655 asserts 25 25 25 0 n/a 00:06:17.655 00:06:17.655 Elapsed time = 0.009 seconds 00:06:17.655 00:06:17.655 real 0m0.081s 00:06:17.655 user 0m0.041s 00:06:17.655 sys 0m0.040s 00:06:17.655 18:07:52 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.655 18:07:52 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:17.655 ************************************ 00:06:17.655 END TEST env_pci 00:06:17.655 ************************************ 00:06:17.913 18:07:52 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:17.913 18:07:52 env -- env/env.sh@15 -- # uname 00:06:17.913 18:07:52 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:17.913 18:07:52 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:17.913 18:07:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:17.913 18:07:52 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:17.913 18:07:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.913 18:07:52 env -- common/autotest_common.sh@10 -- # set +x 00:06:17.913 ************************************ 00:06:17.913 START TEST env_dpdk_post_init 00:06:17.913 ************************************ 00:06:17.913 18:07:52 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:17.913 EAL: Detected CPU lcores: 10 00:06:17.913 EAL: Detected NUMA nodes: 1 00:06:17.913 EAL: Detected shared linkage of DPDK 00:06:17.913 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:17.913 EAL: Selected IOVA mode 'PA' 00:06:18.198 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:18.198 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:18.198 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:18.198 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:06:18.198 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:06:18.198 Starting DPDK initialization... 00:06:18.198 Starting SPDK post initialization... 00:06:18.198 SPDK NVMe probe 00:06:18.198 Attaching to 0000:00:10.0 00:06:18.198 Attaching to 0000:00:11.0 00:06:18.198 Attaching to 0000:00:12.0 00:06:18.198 Attaching to 0000:00:13.0 00:06:18.198 Attached to 0000:00:10.0 00:06:18.198 Attached to 0000:00:11.0 00:06:18.198 Attached to 0000:00:13.0 00:06:18.198 Attached to 0000:00:12.0 00:06:18.198 Cleaning up... 
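env_dpdk_post_init re-initializes the environment and attaches the four emulated QEMU NVMe controllers (vendor/device 1b36:0010) at 0000:00:10.0 through 0000:00:13.0. A hedged sketch of the same init-then-probe flow using the public API (illustrative, not the test's source; the app name is made up, and the core mask mirrors the -c 0x1 argument above):

```c
#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
		     struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Probing %s\n", trid->traddr);
	return true; /* opt in to attaching every controller found */
}

static void attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
		      struct spdk_nvme_ctrlr *ctrlr,
		      const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
}

int main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "post_init_demo"; /* assumed name, not from the test */
	opts.core_mask = "0x1";
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}
	/* A NULL transport ID probes every PCIe NVMe device DPDK can see. */
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
		return 1;
	}
	return 0;
}
```

The 'Attached to 0000:00:1x.0' lines above are the real test's equivalent of attach_cb firing; note they can complete out of order (here 13.0 reports before 12.0).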
00:06:18.198 00:06:18.198 real 0m0.368s 00:06:18.198 user 0m0.147s 00:06:18.198 sys 0m0.121s 00:06:18.198 18:07:52 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.198 18:07:52 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:18.198 ************************************ 00:06:18.198 END TEST env_dpdk_post_init 00:06:18.198 ************************************ 00:06:18.198 18:07:52 env -- env/env.sh@26 -- # uname 00:06:18.198 18:07:52 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:18.198 18:07:52 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:18.198 18:07:52 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.198 18:07:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.198 18:07:52 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.198 ************************************ 00:06:18.198 START TEST env_mem_callbacks 00:06:18.198 ************************************ 00:06:18.198 18:07:52 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:18.198 EAL: Detected CPU lcores: 10 00:06:18.198 EAL: Detected NUMA nodes: 1 00:06:18.198 EAL: Detected shared linkage of DPDK 00:06:18.456 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:18.456 EAL: Selected IOVA mode 'PA' 00:06:18.456 00:06:18.456 00:06:18.456 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.456 http://cunit.sourceforge.net/ 00:06:18.456 00:06:18.456 00:06:18.456 Suite: memory 00:06:18.456 Test: test ... 00:06:18.456 register 0x200000200000 2097152 00:06:18.456 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:18.456 malloc 3145728 00:06:18.456 register 0x200000400000 4194304 00:06:18.456 buf 0x2000004fffc0 len 3145728 PASSED 00:06:18.456 malloc 64 00:06:18.456 buf 0x2000004ffec0 len 64 PASSED 00:06:18.456 malloc 4194304 00:06:18.456 register 0x200000800000 6291456 00:06:18.456 buf 0x2000009fffc0 len 4194304 PASSED 00:06:18.456 free 0x2000004fffc0 3145728 00:06:18.456 free 0x2000004ffec0 64 00:06:18.456 unregister 0x200000400000 4194304 PASSED 00:06:18.456 free 0x2000009fffc0 4194304 00:06:18.456 unregister 0x200000800000 6291456 PASSED 00:06:18.456 malloc 8388608 00:06:18.456 register 0x200000400000 10485760 00:06:18.456 buf 0x2000005fffc0 len 8388608 PASSED 00:06:18.456 free 0x2000005fffc0 8388608 00:06:18.456 unregister 0x200000400000 10485760 PASSED 00:06:18.456 passed 00:06:18.456 00:06:18.456 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.456 suites 1 1 n/a 0 0 00:06:18.456 tests 1 1 1 0 0 00:06:18.456 asserts 15 15 15 0 n/a 00:06:18.456 00:06:18.456 Elapsed time = 0.082 seconds 00:06:18.456 00:06:18.456 real 0m0.322s 00:06:18.456 user 0m0.127s 00:06:18.456 sys 0m0.092s 00:06:18.456 18:07:52 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.456 ************************************ 00:06:18.456 END TEST env_mem_callbacks 00:06:18.456 ************************************ 00:06:18.456 18:07:52 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:18.713 00:06:18.713 real 0m9.753s 00:06:18.713 user 0m7.702s 00:06:18.713 sys 0m1.662s 00:06:18.713 18:07:52 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.713 18:07:52 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.713 ************************************ 00:06:18.713 END TEST env 00:06:18.713 
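Two details in the env results above are worth spelling out. First, the env_memory errors near the top (vaddr=1234, len=1234) are the suite deliberately probing the 2 MB granularity of SPDK mem maps: translations and registrations must be 2 MB-aligned, which is why vaddr=2097152 with len=1234 is rejected. Second, the mem_callbacks register/unregister lines come from a notification callback that the env layer invokes on every spdk_mem_register/spdk_mem_unregister. A sketch of wiring such a callback (illustrative; the real test's callback also prints the buf/len lines shown above):

```c
#include <stdio.h>
#include "spdk/env.h"

/* Invoked by the env layer whenever a memory region is (un)registered. */
static int
demo_mem_notify(void *cb_ctx, struct spdk_mem_map *map,
		enum spdk_mem_map_notify_action action,
		void *vaddr, size_t size)
{
	printf("%s %p %zu\n",
	       action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
	       vaddr, size);
	return 0; /* returning non-zero fails the (un)registration */
}

static const struct spdk_mem_map_ops demo_ops = {
	.notify_cb = demo_mem_notify,
	.are_contiguous = NULL,
};

/* A map allocated with these ops is notified of all existing and future
 * registrations -- the source of the register/unregister lines above. */
static struct spdk_mem_map *demo_map_create(void)
{
	return spdk_mem_map_alloc(0, &demo_ops, NULL);
}
```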
************************************ 00:06:18.713 18:07:52 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:18.713 18:07:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.713 18:07:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.713 18:07:52 -- common/autotest_common.sh@10 -- # set +x 00:06:18.713 ************************************ 00:06:18.713 START TEST rpc 00:06:18.713 ************************************ 00:06:18.713 18:07:52 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:18.713 * Looking for test storage... 00:06:18.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:18.713 18:07:53 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.713 18:07:53 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.713 18:07:53 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.971 18:07:53 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.971 18:07:53 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.971 18:07:53 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.971 18:07:53 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.971 18:07:53 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.971 18:07:53 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.971 18:07:53 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.971 18:07:53 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.971 18:07:53 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.971 18:07:53 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.971 18:07:53 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.971 18:07:53 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.971 18:07:53 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:18.971 18:07:53 rpc -- scripts/common.sh@345 -- # : 1 00:06:18.971 18:07:53 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.971 18:07:53 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.971 18:07:53 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:18.971 18:07:53 rpc -- scripts/common.sh@353 -- # local d=1 00:06:18.971 18:07:53 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.971 18:07:53 rpc -- scripts/common.sh@355 -- # echo 1 00:06:18.971 18:07:53 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.971 18:07:53 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:18.971 18:07:53 rpc -- scripts/common.sh@353 -- # local d=2 00:06:18.971 18:07:53 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.971 18:07:53 rpc -- scripts/common.sh@355 -- # echo 2 00:06:18.971 18:07:53 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.971 18:07:53 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.971 18:07:53 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.971 18:07:53 rpc -- scripts/common.sh@368 -- # return 0 00:06:18.971 18:07:53 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.971 18:07:53 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.971 --rc genhtml_branch_coverage=1 00:06:18.971 --rc genhtml_function_coverage=1 00:06:18.971 --rc genhtml_legend=1 00:06:18.971 --rc geninfo_all_blocks=1 00:06:18.971 --rc geninfo_unexecuted_blocks=1 00:06:18.971 00:06:18.971 ' 00:06:18.971 18:07:53 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.971 --rc genhtml_branch_coverage=1 00:06:18.971 --rc genhtml_function_coverage=1 00:06:18.971 --rc genhtml_legend=1 00:06:18.971 --rc geninfo_all_blocks=1 00:06:18.971 --rc geninfo_unexecuted_blocks=1 00:06:18.971 00:06:18.971 ' 00:06:18.971 18:07:53 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.971 --rc genhtml_branch_coverage=1 00:06:18.971 --rc genhtml_function_coverage=1 00:06:18.971 --rc genhtml_legend=1 00:06:18.971 --rc geninfo_all_blocks=1 00:06:18.971 --rc geninfo_unexecuted_blocks=1 00:06:18.971 00:06:18.971 ' 00:06:18.971 18:07:53 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.971 --rc genhtml_branch_coverage=1 00:06:18.971 --rc genhtml_function_coverage=1 00:06:18.971 --rc genhtml_legend=1 00:06:18.971 --rc geninfo_all_blocks=1 00:06:18.971 --rc geninfo_unexecuted_blocks=1 00:06:18.971 00:06:18.971 ' 00:06:18.971 18:07:53 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57944 00:06:18.971 18:07:53 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:18.971 18:07:53 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:18.971 18:07:53 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57944 00:06:18.971 18:07:53 rpc -- common/autotest_common.sh@835 -- # '[' -z 57944 ']' 00:06:18.971 18:07:53 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.971 18:07:53 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.971 18:07:53 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
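From here the fixture changes: rpc.sh launches spdk_tgt with bdev tracepoints enabled (-e bdev), waitforlisten blocks until the target answers on /var/tmp/spdk.sock, and each rpc_cmd below is, as far as this harness goes, a thin wrapper driving scripts/rpc.py against that socket, with PYTHONPATH pointed at the in-tree python client.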
00:06:18.971 18:07:53 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.971 18:07:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.971 [2024-11-26 18:07:53.324878] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:06:18.971 [2024-11-26 18:07:53.325079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57944 ] 00:06:19.228 [2024-11-26 18:07:53.513435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.228 [2024-11-26 18:07:53.648930] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:19.228 [2024-11-26 18:07:53.649022] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57944' to capture a snapshot of events at runtime. 00:06:19.228 [2024-11-26 18:07:53.649041] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:19.228 [2024-11-26 18:07:53.649062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:19.228 [2024-11-26 18:07:53.649074] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57944 for offline analysis/debug. 00:06:19.228 [2024-11-26 18:07:53.650416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.163 18:07:54 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.163 18:07:54 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:20.163 18:07:54 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:20.163 18:07:54 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:20.163 18:07:54 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:20.163 18:07:54 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:20.163 18:07:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.163 18:07:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.163 18:07:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.163 ************************************ 00:06:20.163 START TEST rpc_integrity 00:06:20.163 ************************************ 00:06:20.163 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:20.163 18:07:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:20.163 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.163 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.163 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.163 18:07:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:20.163 18:07:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:20.422 18:07:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:20.422 18:07:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:20.422 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.422 18:07:54 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.422 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.422 18:07:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:20.422 18:07:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:20.422 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.422 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.422 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.422 18:07:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:20.422 { 00:06:20.422 "name": "Malloc0", 00:06:20.422 "aliases": [ 00:06:20.422 "3fd40d2e-f78b-4546-be69-3c65b72448a7" 00:06:20.422 ], 00:06:20.422 "product_name": "Malloc disk", 00:06:20.422 "block_size": 512, 00:06:20.422 "num_blocks": 16384, 00:06:20.422 "uuid": "3fd40d2e-f78b-4546-be69-3c65b72448a7", 00:06:20.422 "assigned_rate_limits": { 00:06:20.422 "rw_ios_per_sec": 0, 00:06:20.422 "rw_mbytes_per_sec": 0, 00:06:20.422 "r_mbytes_per_sec": 0, 00:06:20.422 "w_mbytes_per_sec": 0 00:06:20.422 }, 00:06:20.422 "claimed": false, 00:06:20.422 "zoned": false, 00:06:20.422 "supported_io_types": { 00:06:20.422 "read": true, 00:06:20.422 "write": true, 00:06:20.422 "unmap": true, 00:06:20.422 "flush": true, 00:06:20.422 "reset": true, 00:06:20.422 "nvme_admin": false, 00:06:20.422 "nvme_io": false, 00:06:20.422 "nvme_io_md": false, 00:06:20.422 "write_zeroes": true, 00:06:20.422 "zcopy": true, 00:06:20.422 "get_zone_info": false, 00:06:20.422 "zone_management": false, 00:06:20.422 "zone_append": false, 00:06:20.422 "compare": false, 00:06:20.422 "compare_and_write": false, 00:06:20.422 "abort": true, 00:06:20.422 "seek_hole": false, 00:06:20.422 "seek_data": false, 00:06:20.422 "copy": true, 00:06:20.422 "nvme_iov_md": false 00:06:20.422 }, 00:06:20.422 "memory_domains": [ 00:06:20.422 { 00:06:20.422 "dma_device_id": "system", 00:06:20.422 "dma_device_type": 1 00:06:20.422 }, 00:06:20.422 { 00:06:20.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.422 "dma_device_type": 2 00:06:20.422 } 00:06:20.422 ], 00:06:20.422 "driver_specific": {} 00:06:20.422 } 00:06:20.422 ]' 00:06:20.422 18:07:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:20.422 18:07:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:20.422 18:07:54 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:20.422 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.422 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.422 [2024-11-26 18:07:54.727036] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:20.422 [2024-11-26 18:07:54.727129] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:20.422 [2024-11-26 18:07:54.727179] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:20.422 [2024-11-26 18:07:54.727200] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:20.422 [2024-11-26 18:07:54.730355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:20.422 [2024-11-26 18:07:54.730416] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:20.422 Passthru0 00:06:20.422 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.422 
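The bdev_get_bdevs dump that follows is the point of the test: with Passthru0 layered on top, Malloc0 flips to "claimed": true with "claim_type": "exclusive_write", while Passthru0 reports the same geometry (512-byte blocks, 16384 blocks) and a driver_specific.passthru section naming Malloc0 as its base bdev.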
18:07:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:20.422 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.422 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.422 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.422 18:07:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:20.422 { 00:06:20.422 "name": "Malloc0", 00:06:20.422 "aliases": [ 00:06:20.422 "3fd40d2e-f78b-4546-be69-3c65b72448a7" 00:06:20.422 ], 00:06:20.422 "product_name": "Malloc disk", 00:06:20.422 "block_size": 512, 00:06:20.422 "num_blocks": 16384, 00:06:20.422 "uuid": "3fd40d2e-f78b-4546-be69-3c65b72448a7", 00:06:20.422 "assigned_rate_limits": { 00:06:20.422 "rw_ios_per_sec": 0, 00:06:20.422 "rw_mbytes_per_sec": 0, 00:06:20.422 "r_mbytes_per_sec": 0, 00:06:20.422 "w_mbytes_per_sec": 0 00:06:20.422 }, 00:06:20.422 "claimed": true, 00:06:20.422 "claim_type": "exclusive_write", 00:06:20.422 "zoned": false, 00:06:20.422 "supported_io_types": { 00:06:20.422 "read": true, 00:06:20.422 "write": true, 00:06:20.422 "unmap": true, 00:06:20.422 "flush": true, 00:06:20.422 "reset": true, 00:06:20.422 "nvme_admin": false, 00:06:20.422 "nvme_io": false, 00:06:20.422 "nvme_io_md": false, 00:06:20.422 "write_zeroes": true, 00:06:20.422 "zcopy": true, 00:06:20.422 "get_zone_info": false, 00:06:20.422 "zone_management": false, 00:06:20.422 "zone_append": false, 00:06:20.422 "compare": false, 00:06:20.422 "compare_and_write": false, 00:06:20.422 "abort": true, 00:06:20.422 "seek_hole": false, 00:06:20.422 "seek_data": false, 00:06:20.422 "copy": true, 00:06:20.422 "nvme_iov_md": false 00:06:20.422 }, 00:06:20.422 "memory_domains": [ 00:06:20.422 { 00:06:20.422 "dma_device_id": "system", 00:06:20.422 "dma_device_type": 1 00:06:20.422 }, 00:06:20.422 { 00:06:20.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.422 "dma_device_type": 2 00:06:20.422 } 00:06:20.422 ], 00:06:20.422 "driver_specific": {} 00:06:20.422 }, 00:06:20.422 { 00:06:20.422 "name": "Passthru0", 00:06:20.422 "aliases": [ 00:06:20.422 "0df6e44f-f0f1-581a-b105-64b437b5bfd8" 00:06:20.422 ], 00:06:20.422 "product_name": "passthru", 00:06:20.422 "block_size": 512, 00:06:20.422 "num_blocks": 16384, 00:06:20.422 "uuid": "0df6e44f-f0f1-581a-b105-64b437b5bfd8", 00:06:20.422 "assigned_rate_limits": { 00:06:20.422 "rw_ios_per_sec": 0, 00:06:20.422 "rw_mbytes_per_sec": 0, 00:06:20.422 "r_mbytes_per_sec": 0, 00:06:20.422 "w_mbytes_per_sec": 0 00:06:20.422 }, 00:06:20.422 "claimed": false, 00:06:20.422 "zoned": false, 00:06:20.422 "supported_io_types": { 00:06:20.422 "read": true, 00:06:20.422 "write": true, 00:06:20.422 "unmap": true, 00:06:20.422 "flush": true, 00:06:20.422 "reset": true, 00:06:20.422 "nvme_admin": false, 00:06:20.422 "nvme_io": false, 00:06:20.422 "nvme_io_md": false, 00:06:20.422 "write_zeroes": true, 00:06:20.422 "zcopy": true, 00:06:20.422 "get_zone_info": false, 00:06:20.422 "zone_management": false, 00:06:20.422 "zone_append": false, 00:06:20.422 "compare": false, 00:06:20.422 "compare_and_write": false, 00:06:20.422 "abort": true, 00:06:20.422 "seek_hole": false, 00:06:20.422 "seek_data": false, 00:06:20.422 "copy": true, 00:06:20.422 "nvme_iov_md": false 00:06:20.422 }, 00:06:20.422 "memory_domains": [ 00:06:20.422 { 00:06:20.422 "dma_device_id": "system", 00:06:20.422 "dma_device_type": 1 00:06:20.422 }, 00:06:20.422 { 00:06:20.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.422 "dma_device_type": 2 
00:06:20.422 } 00:06:20.422 ], 00:06:20.422 "driver_specific": { 00:06:20.422 "passthru": { 00:06:20.422 "name": "Passthru0", 00:06:20.422 "base_bdev_name": "Malloc0" 00:06:20.422 } 00:06:20.422 } 00:06:20.422 } 00:06:20.422 ]' 00:06:20.422 18:07:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:20.422 18:07:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:20.422 18:07:54 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:20.422 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.422 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.422 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.422 18:07:54 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:20.422 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.422 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.422 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.422 18:07:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:20.423 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.423 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.423 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.423 18:07:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:20.423 18:07:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:20.706 ************************************ 00:06:20.706 END TEST rpc_integrity 00:06:20.706 ************************************ 00:06:20.706 18:07:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:20.706 00:06:20.706 real 0m0.367s 00:06:20.706 user 0m0.232s 00:06:20.706 sys 0m0.041s 00:06:20.706 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.706 18:07:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.706 18:07:54 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:20.706 18:07:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.706 18:07:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.706 18:07:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.706 ************************************ 00:06:20.706 START TEST rpc_plugins 00:06:20.706 ************************************ 00:06:20.706 18:07:54 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:20.706 18:07:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:20.706 18:07:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.706 18:07:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:20.706 18:07:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.706 18:07:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:20.706 18:07:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:20.706 18:07:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.706 18:07:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:20.706 18:07:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.706 18:07:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:20.706 { 00:06:20.706 "name": "Malloc1", 00:06:20.706 "aliases": 
[ 00:06:20.706 "617e6cf2-945f-4386-8db2-35bc91dfde47" 00:06:20.706 ], 00:06:20.706 "product_name": "Malloc disk", 00:06:20.706 "block_size": 4096, 00:06:20.706 "num_blocks": 256, 00:06:20.706 "uuid": "617e6cf2-945f-4386-8db2-35bc91dfde47", 00:06:20.706 "assigned_rate_limits": { 00:06:20.706 "rw_ios_per_sec": 0, 00:06:20.706 "rw_mbytes_per_sec": 0, 00:06:20.706 "r_mbytes_per_sec": 0, 00:06:20.706 "w_mbytes_per_sec": 0 00:06:20.706 }, 00:06:20.706 "claimed": false, 00:06:20.706 "zoned": false, 00:06:20.706 "supported_io_types": { 00:06:20.706 "read": true, 00:06:20.706 "write": true, 00:06:20.706 "unmap": true, 00:06:20.706 "flush": true, 00:06:20.706 "reset": true, 00:06:20.706 "nvme_admin": false, 00:06:20.706 "nvme_io": false, 00:06:20.706 "nvme_io_md": false, 00:06:20.706 "write_zeroes": true, 00:06:20.706 "zcopy": true, 00:06:20.706 "get_zone_info": false, 00:06:20.706 "zone_management": false, 00:06:20.706 "zone_append": false, 00:06:20.706 "compare": false, 00:06:20.706 "compare_and_write": false, 00:06:20.706 "abort": true, 00:06:20.706 "seek_hole": false, 00:06:20.706 "seek_data": false, 00:06:20.706 "copy": true, 00:06:20.706 "nvme_iov_md": false 00:06:20.706 }, 00:06:20.706 "memory_domains": [ 00:06:20.706 { 00:06:20.706 "dma_device_id": "system", 00:06:20.706 "dma_device_type": 1 00:06:20.706 }, 00:06:20.706 { 00:06:20.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.706 "dma_device_type": 2 00:06:20.706 } 00:06:20.706 ], 00:06:20.706 "driver_specific": {} 00:06:20.706 } 00:06:20.706 ]' 00:06:20.706 18:07:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:20.706 18:07:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:20.706 18:07:55 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:20.706 18:07:55 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.706 18:07:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:20.706 18:07:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.706 18:07:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:20.706 18:07:55 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.706 18:07:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:20.706 18:07:55 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.706 18:07:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:20.706 18:07:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:20.706 ************************************ 00:06:20.706 END TEST rpc_plugins 00:06:20.707 ************************************ 00:06:20.707 18:07:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:20.707 00:06:20.707 real 0m0.164s 00:06:20.707 user 0m0.098s 00:06:20.707 sys 0m0.025s 00:06:20.707 18:07:55 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.707 18:07:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:20.995 18:07:55 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:20.995 18:07:55 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.995 18:07:55 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.995 18:07:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.995 ************************************ 00:06:20.995 START TEST rpc_trace_cmd_test 00:06:20.995 ************************************ 00:06:20.995 18:07:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:06:20.995 18:07:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:20.995 18:07:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:20.995 18:07:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.995 18:07:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.995 18:07:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.995 18:07:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:20.995 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57944", 00:06:20.995 "tpoint_group_mask": "0x8", 00:06:20.995 "iscsi_conn": { 00:06:20.995 "mask": "0x2", 00:06:20.995 "tpoint_mask": "0x0" 00:06:20.995 }, 00:06:20.995 "scsi": { 00:06:20.995 "mask": "0x4", 00:06:20.995 "tpoint_mask": "0x0" 00:06:20.995 }, 00:06:20.995 "bdev": { 00:06:20.995 "mask": "0x8", 00:06:20.995 "tpoint_mask": "0xffffffffffffffff" 00:06:20.995 }, 00:06:20.995 "nvmf_rdma": { 00:06:20.995 "mask": "0x10", 00:06:20.995 "tpoint_mask": "0x0" 00:06:20.995 }, 00:06:20.995 "nvmf_tcp": { 00:06:20.995 "mask": "0x20", 00:06:20.995 "tpoint_mask": "0x0" 00:06:20.995 }, 00:06:20.995 "ftl": { 00:06:20.995 "mask": "0x40", 00:06:20.995 "tpoint_mask": "0x0" 00:06:20.995 }, 00:06:20.995 "blobfs": { 00:06:20.995 "mask": "0x80", 00:06:20.995 "tpoint_mask": "0x0" 00:06:20.995 }, 00:06:20.995 "dsa": { 00:06:20.995 "mask": "0x200", 00:06:20.995 "tpoint_mask": "0x0" 00:06:20.995 }, 00:06:20.995 "thread": { 00:06:20.995 "mask": "0x400", 00:06:20.995 "tpoint_mask": "0x0" 00:06:20.995 }, 00:06:20.995 "nvme_pcie": { 00:06:20.995 "mask": "0x800", 00:06:20.995 "tpoint_mask": "0x0" 00:06:20.995 }, 00:06:20.995 "iaa": { 00:06:20.995 "mask": "0x1000", 00:06:20.995 "tpoint_mask": "0x0" 00:06:20.995 }, 00:06:20.995 "nvme_tcp": { 00:06:20.995 "mask": "0x2000", 00:06:20.995 "tpoint_mask": "0x0" 00:06:20.995 }, 00:06:20.995 "bdev_nvme": { 00:06:20.995 "mask": "0x4000", 00:06:20.995 "tpoint_mask": "0x0" 00:06:20.995 }, 00:06:20.995 "sock": { 00:06:20.995 "mask": "0x8000", 00:06:20.995 "tpoint_mask": "0x0" 00:06:20.995 }, 00:06:20.995 "blob": { 00:06:20.995 "mask": "0x10000", 00:06:20.995 "tpoint_mask": "0x0" 00:06:20.995 }, 00:06:20.995 "bdev_raid": { 00:06:20.996 "mask": "0x20000", 00:06:20.996 "tpoint_mask": "0x0" 00:06:20.996 }, 00:06:20.996 "scheduler": { 00:06:20.996 "mask": "0x40000", 00:06:20.996 "tpoint_mask": "0x0" 00:06:20.996 } 00:06:20.996 }' 00:06:20.996 18:07:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:20.996 18:07:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:20.996 18:07:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:20.996 18:07:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:20.996 18:07:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:20.996 18:07:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:20.996 18:07:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:20.996 18:07:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:20.996 18:07:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:21.254 ************************************ 00:06:21.254 END TEST rpc_trace_cmd_test 00:06:21.254 ************************************ 00:06:21.254 18:07:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:21.254 00:06:21.254 real 0m0.272s 
00:06:21.254 user 0m0.240s 00:06:21.254 sys 0m0.024s 00:06:21.254 18:07:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.254 18:07:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.254 18:07:55 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:21.254 18:07:55 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:21.254 18:07:55 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:21.254 18:07:55 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.254 18:07:55 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.254 18:07:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.254 ************************************ 00:06:21.254 START TEST rpc_daemon_integrity 00:06:21.254 ************************************ 00:06:21.254 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:21.254 18:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:21.254 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.254 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.254 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.254 18:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:21.254 18:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:21.254 18:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:21.254 18:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:21.254 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.254 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.254 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.254 18:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:21.254 18:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:21.254 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.254 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.254 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.254 18:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:21.254 { 00:06:21.254 "name": "Malloc2", 00:06:21.254 "aliases": [ 00:06:21.254 "2ff7ae86-70bd-48c2-b58e-d0a252aa4517" 00:06:21.254 ], 00:06:21.254 "product_name": "Malloc disk", 00:06:21.254 "block_size": 512, 00:06:21.254 "num_blocks": 16384, 00:06:21.254 "uuid": "2ff7ae86-70bd-48c2-b58e-d0a252aa4517", 00:06:21.254 "assigned_rate_limits": { 00:06:21.254 "rw_ios_per_sec": 0, 00:06:21.254 "rw_mbytes_per_sec": 0, 00:06:21.254 "r_mbytes_per_sec": 0, 00:06:21.254 "w_mbytes_per_sec": 0 00:06:21.254 }, 00:06:21.254 "claimed": false, 00:06:21.254 "zoned": false, 00:06:21.254 "supported_io_types": { 00:06:21.254 "read": true, 00:06:21.254 "write": true, 00:06:21.254 "unmap": true, 00:06:21.254 "flush": true, 00:06:21.254 "reset": true, 00:06:21.254 "nvme_admin": false, 00:06:21.254 "nvme_io": false, 00:06:21.254 "nvme_io_md": false, 00:06:21.254 "write_zeroes": true, 00:06:21.254 "zcopy": true, 00:06:21.254 "get_zone_info": false, 00:06:21.254 "zone_management": false, 00:06:21.254 "zone_append": false, 00:06:21.254 "compare": false, 00:06:21.254 
"compare_and_write": false, 00:06:21.254 "abort": true, 00:06:21.254 "seek_hole": false, 00:06:21.254 "seek_data": false, 00:06:21.254 "copy": true, 00:06:21.254 "nvme_iov_md": false 00:06:21.254 }, 00:06:21.254 "memory_domains": [ 00:06:21.254 { 00:06:21.254 "dma_device_id": "system", 00:06:21.254 "dma_device_type": 1 00:06:21.254 }, 00:06:21.254 { 00:06:21.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.254 "dma_device_type": 2 00:06:21.254 } 00:06:21.254 ], 00:06:21.254 "driver_specific": {} 00:06:21.254 } 00:06:21.254 ]' 00:06:21.254 18:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:21.254 18:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:21.254 18:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:21.254 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.254 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.254 [2024-11-26 18:07:55.673879] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:21.255 [2024-11-26 18:07:55.673971] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:21.255 [2024-11-26 18:07:55.674007] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:06:21.255 [2024-11-26 18:07:55.674031] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:21.255 [2024-11-26 18:07:55.677191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:21.255 [2024-11-26 18:07:55.677403] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:21.255 Passthru0 00:06:21.255 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.255 18:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:21.255 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.255 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.255 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.255 18:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:21.255 { 00:06:21.255 "name": "Malloc2", 00:06:21.255 "aliases": [ 00:06:21.255 "2ff7ae86-70bd-48c2-b58e-d0a252aa4517" 00:06:21.255 ], 00:06:21.255 "product_name": "Malloc disk", 00:06:21.255 "block_size": 512, 00:06:21.255 "num_blocks": 16384, 00:06:21.255 "uuid": "2ff7ae86-70bd-48c2-b58e-d0a252aa4517", 00:06:21.255 "assigned_rate_limits": { 00:06:21.255 "rw_ios_per_sec": 0, 00:06:21.255 "rw_mbytes_per_sec": 0, 00:06:21.255 "r_mbytes_per_sec": 0, 00:06:21.255 "w_mbytes_per_sec": 0 00:06:21.255 }, 00:06:21.255 "claimed": true, 00:06:21.255 "claim_type": "exclusive_write", 00:06:21.255 "zoned": false, 00:06:21.255 "supported_io_types": { 00:06:21.255 "read": true, 00:06:21.255 "write": true, 00:06:21.255 "unmap": true, 00:06:21.255 "flush": true, 00:06:21.255 "reset": true, 00:06:21.255 "nvme_admin": false, 00:06:21.255 "nvme_io": false, 00:06:21.255 "nvme_io_md": false, 00:06:21.255 "write_zeroes": true, 00:06:21.255 "zcopy": true, 00:06:21.255 "get_zone_info": false, 00:06:21.255 "zone_management": false, 00:06:21.255 "zone_append": false, 00:06:21.255 "compare": false, 00:06:21.255 "compare_and_write": false, 00:06:21.255 "abort": true, 00:06:21.255 "seek_hole": false, 00:06:21.255 "seek_data": false, 
00:06:21.255 "copy": true, 00:06:21.255 "nvme_iov_md": false 00:06:21.255 }, 00:06:21.255 "memory_domains": [ 00:06:21.255 { 00:06:21.255 "dma_device_id": "system", 00:06:21.255 "dma_device_type": 1 00:06:21.255 }, 00:06:21.255 { 00:06:21.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.255 "dma_device_type": 2 00:06:21.255 } 00:06:21.255 ], 00:06:21.255 "driver_specific": {} 00:06:21.255 }, 00:06:21.255 { 00:06:21.255 "name": "Passthru0", 00:06:21.255 "aliases": [ 00:06:21.255 "a91fbdbd-4ed9-5fc8-8b62-674adb35fcc3" 00:06:21.255 ], 00:06:21.255 "product_name": "passthru", 00:06:21.255 "block_size": 512, 00:06:21.255 "num_blocks": 16384, 00:06:21.255 "uuid": "a91fbdbd-4ed9-5fc8-8b62-674adb35fcc3", 00:06:21.255 "assigned_rate_limits": { 00:06:21.255 "rw_ios_per_sec": 0, 00:06:21.255 "rw_mbytes_per_sec": 0, 00:06:21.255 "r_mbytes_per_sec": 0, 00:06:21.255 "w_mbytes_per_sec": 0 00:06:21.255 }, 00:06:21.255 "claimed": false, 00:06:21.255 "zoned": false, 00:06:21.255 "supported_io_types": { 00:06:21.255 "read": true, 00:06:21.255 "write": true, 00:06:21.255 "unmap": true, 00:06:21.255 "flush": true, 00:06:21.255 "reset": true, 00:06:21.255 "nvme_admin": false, 00:06:21.255 "nvme_io": false, 00:06:21.255 "nvme_io_md": false, 00:06:21.255 "write_zeroes": true, 00:06:21.255 "zcopy": true, 00:06:21.255 "get_zone_info": false, 00:06:21.255 "zone_management": false, 00:06:21.255 "zone_append": false, 00:06:21.255 "compare": false, 00:06:21.255 "compare_and_write": false, 00:06:21.255 "abort": true, 00:06:21.255 "seek_hole": false, 00:06:21.255 "seek_data": false, 00:06:21.255 "copy": true, 00:06:21.255 "nvme_iov_md": false 00:06:21.255 }, 00:06:21.255 "memory_domains": [ 00:06:21.255 { 00:06:21.255 "dma_device_id": "system", 00:06:21.255 "dma_device_type": 1 00:06:21.255 }, 00:06:21.255 { 00:06:21.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.255 "dma_device_type": 2 00:06:21.255 } 00:06:21.255 ], 00:06:21.255 "driver_specific": { 00:06:21.255 "passthru": { 00:06:21.255 "name": "Passthru0", 00:06:21.255 "base_bdev_name": "Malloc2" 00:06:21.255 } 00:06:21.255 } 00:06:21.255 } 00:06:21.255 ]' 00:06:21.255 18:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:21.513 18:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:21.513 18:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:21.513 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.513 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.513 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.513 18:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:21.513 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.513 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.513 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.513 18:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:21.513 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.513 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.513 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.513 18:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:06:21.513 18:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:21.513 ************************************ 00:06:21.513 END TEST rpc_daemon_integrity 00:06:21.513 ************************************ 00:06:21.513 18:07:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:21.513 00:06:21.513 real 0m0.353s 00:06:21.513 user 0m0.211s 00:06:21.513 sys 0m0.044s 00:06:21.513 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.513 18:07:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.513 18:07:55 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:21.513 18:07:55 rpc -- rpc/rpc.sh@84 -- # killprocess 57944 00:06:21.513 18:07:55 rpc -- common/autotest_common.sh@954 -- # '[' -z 57944 ']' 00:06:21.513 18:07:55 rpc -- common/autotest_common.sh@958 -- # kill -0 57944 00:06:21.513 18:07:55 rpc -- common/autotest_common.sh@959 -- # uname 00:06:21.513 18:07:55 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.513 18:07:55 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57944 00:06:21.513 killing process with pid 57944 00:06:21.513 18:07:55 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.513 18:07:55 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.513 18:07:55 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57944' 00:06:21.513 18:07:55 rpc -- common/autotest_common.sh@973 -- # kill 57944 00:06:21.513 18:07:55 rpc -- common/autotest_common.sh@978 -- # wait 57944 00:06:24.043 ************************************ 00:06:24.043 END TEST rpc 00:06:24.043 ************************************ 00:06:24.043 00:06:24.043 real 0m5.255s 00:06:24.043 user 0m5.917s 00:06:24.043 sys 0m0.918s 00:06:24.043 18:07:58 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.043 18:07:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.043 18:07:58 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:24.043 18:07:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.043 18:07:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.043 18:07:58 -- common/autotest_common.sh@10 -- # set +x 00:06:24.043 ************************************ 00:06:24.043 START TEST skip_rpc 00:06:24.043 ************************************ 00:06:24.043 18:07:58 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:24.043 * Looking for test storage... 
00:06:24.043 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:06:24.043 18:07:58 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:24.043 18:07:58 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:06:24.043 18:07:58 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:24.043 18:07:58 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@345 -- # : 1
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:24.043 18:07:58 skip_rpc -- scripts/common.sh@368 -- # return 0
00:06:24.043 18:07:58 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:24.043 18:07:58 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:24.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:24.043 --rc genhtml_branch_coverage=1
00:06:24.043 --rc genhtml_function_coverage=1
00:06:24.043 --rc genhtml_legend=1
00:06:24.043 --rc geninfo_all_blocks=1
00:06:24.043 --rc geninfo_unexecuted_blocks=1
00:06:24.043
00:06:24.043 '
00:06:24.043 18:07:58 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:24.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:24.043 --rc genhtml_branch_coverage=1
00:06:24.043 --rc genhtml_function_coverage=1
00:06:24.043 --rc genhtml_legend=1
00:06:24.043 --rc geninfo_all_blocks=1
00:06:24.043 --rc geninfo_unexecuted_blocks=1
00:06:24.043
00:06:24.043 '
00:06:24.043 18:07:58 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:24.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:24.043 --rc genhtml_branch_coverage=1
00:06:24.043 --rc genhtml_function_coverage=1
00:06:24.043 --rc genhtml_legend=1
00:06:24.043 --rc geninfo_all_blocks=1
00:06:24.043 --rc geninfo_unexecuted_blocks=1
00:06:24.043
00:06:24.043 '
00:06:24.043 18:07:58 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:24.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:24.043 --rc genhtml_branch_coverage=1
00:06:24.043 --rc genhtml_function_coverage=1
00:06:24.043 --rc genhtml_legend=1
00:06:24.043 --rc geninfo_all_blocks=1
00:06:24.043 --rc geninfo_unexecuted_blocks=1
00:06:24.043
00:06:24.044 '
00:06:24.044 18:07:58 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:06:24.044 18:07:58 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:06:24.044 18:07:58 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:06:24.044 18:07:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:24.044 18:07:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:24.044 18:07:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:24.044 ************************************
00:06:24.044 START TEST skip_rpc
00:06:24.044 ************************************
00:06:24.044 18:07:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:06:24.044 18:07:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58173
00:06:24.044 18:07:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:24.044 18:07:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:06:24.044 18:07:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:06:24.301 [2024-11-26 18:07:58.631286] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization...
00:06:24.301 [2024-11-26 18:07:58.631782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58173 ]
00:06:24.559 [2024-11-26 18:07:58.823800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:24.559 [2024-11-26 18:07:58.980447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58173
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58173 ']'
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58173
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58173
00:06:29.825 killing process with pid 58173
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58173'
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58173
00:06:29.825 18:08:03 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58173
00:06:31.726 ************************************
00:06:31.726 END TEST skip_rpc
00:06:31.726 ************************************
00:06:31.726
00:06:31.726 real 0m7.332s
00:06:31.726 user 0m6.728s
00:06:31.726 sys 0m0.495s
00:06:31.726 18:08:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:31.726 18:08:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:31.726 18:08:05 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:06:31.726 18:08:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:31.726 18:08:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:31.726 18:08:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:31.726 ************************************
00:06:31.726 START TEST skip_rpc_with_json
00:06:31.726 ************************************
00:06:31.726 18:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:06:31.726 18:08:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:06:31.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:31.726 18:08:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58277
00:06:31.726 18:08:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:31.726 18:08:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:31.726 18:08:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58277
00:06:31.726 18:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58277 ']'
00:06:31.726 18:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:31.726 18:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:31.726 18:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:31.726 18:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:31.726 18:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:31.726 [2024-11-26 18:08:06.023541] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization...
00:06:31.726 [2024-11-26 18:08:06.024092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58277 ]
00:06:31.986 [2024-11-26 18:08:06.206744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:31.986 [2024-11-26 18:08:06.366772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:32.961 18:08:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:32.961 18:08:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:06:32.961 18:08:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:06:32.961 18:08:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:32.961 18:08:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:32.961 [2024-11-26 18:08:07.264275] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:06:32.961 request:
00:06:32.961 {
00:06:32.961 "trtype": "tcp",
00:06:32.961 "method": "nvmf_get_transports",
00:06:32.961 "req_id": 1
00:06:32.961 }
00:06:32.961 Got JSON-RPC error response
00:06:32.961 response:
00:06:32.961 {
00:06:32.961 "code": -19,
00:06:32.961 "message": "No such device"
00:06:32.961 }
00:06:32.961 18:08:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:06:32.961 18:08:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:06:32.961 18:08:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:32.961 18:08:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:32.961 [2024-11-26 18:08:07.276415] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:32.961 18:08:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:32.961 18:08:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:06:32.961 18:08:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:32.961 18:08:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:33.220 18:08:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:33.220 18:08:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:06:33.220 {
00:06:33.220 "subsystems": [
00:06:33.220 {
00:06:33.220 "subsystem": "fsdev",
00:06:33.220 "config": [
00:06:33.220 {
00:06:33.220 "method": "fsdev_set_opts",
00:06:33.220 "params": {
00:06:33.220 "fsdev_io_pool_size": 65535,
00:06:33.220 "fsdev_io_cache_size": 256
00:06:33.220 }
00:06:33.220 }
00:06:33.220 ]
00:06:33.220 },
00:06:33.220 {
00:06:33.220 "subsystem": "keyring",
00:06:33.220 "config": []
00:06:33.220 },
00:06:33.220 {
00:06:33.220 "subsystem": "iobuf",
00:06:33.220 "config": [
00:06:33.220 {
00:06:33.220 "method": "iobuf_set_options",
00:06:33.220 "params": {
00:06:33.220 "small_pool_count": 8192,
00:06:33.220 "large_pool_count": 1024,
00:06:33.220 "small_bufsize": 8192,
00:06:33.220 "large_bufsize": 135168,
00:06:33.220 "enable_numa": false
00:06:33.220 }
00:06:33.220 }
00:06:33.220 ]
00:06:33.220 },
00:06:33.220 {
00:06:33.220 "subsystem": "sock",
00:06:33.220 "config": [
00:06:33.220 {
00:06:33.220 "method": "sock_set_default_impl",
00:06:33.220 "params": {
00:06:33.220 "impl_name": "posix"
00:06:33.220 }
00:06:33.220 },
00:06:33.220 {
00:06:33.220 "method": "sock_impl_set_options",
00:06:33.220 "params": {
00:06:33.220 "impl_name": "ssl",
00:06:33.220 "recv_buf_size": 4096,
00:06:33.220 "send_buf_size": 4096,
00:06:33.220 "enable_recv_pipe": true,
00:06:33.220 "enable_quickack": false,
00:06:33.220 "enable_placement_id": 0,
00:06:33.220 "enable_zerocopy_send_server": true,
00:06:33.220 "enable_zerocopy_send_client": false,
00:06:33.220 "zerocopy_threshold": 0,
00:06:33.220 "tls_version": 0,
00:06:33.220 "enable_ktls": false
00:06:33.220 }
00:06:33.220 },
00:06:33.220 {
00:06:33.220 "method": "sock_impl_set_options",
00:06:33.220 "params": {
00:06:33.220 "impl_name": "posix",
00:06:33.221 "recv_buf_size": 2097152,
00:06:33.221 "send_buf_size": 2097152,
00:06:33.221 "enable_recv_pipe": true,
00:06:33.221 "enable_quickack": false,
00:06:33.221 "enable_placement_id": 0,
00:06:33.221 "enable_zerocopy_send_server": true,
00:06:33.221 "enable_zerocopy_send_client": false,
00:06:33.221 "zerocopy_threshold": 0,
00:06:33.221 "tls_version": 0,
00:06:33.221 "enable_ktls": false
00:06:33.221 }
00:06:33.221 }
00:06:33.221 ]
00:06:33.221 },
00:06:33.221 {
00:06:33.221 "subsystem": "vmd",
00:06:33.221 "config": []
00:06:33.221 },
00:06:33.221 {
00:06:33.221 "subsystem": "accel",
00:06:33.221 "config": [
00:06:33.221 {
00:06:33.221 "method": "accel_set_options",
00:06:33.221 "params": {
00:06:33.221 "small_cache_size": 128,
00:06:33.221 "large_cache_size": 16,
00:06:33.221 "task_count": 2048,
00:06:33.221 "sequence_count": 2048,
00:06:33.221 "buf_count": 2048
00:06:33.221 }
00:06:33.221 }
00:06:33.221 ]
00:06:33.221 },
00:06:33.221 {
00:06:33.221 "subsystem": "bdev",
00:06:33.221 "config": [
00:06:33.221 {
00:06:33.221 "method": "bdev_set_options",
00:06:33.221 "params": {
00:06:33.221 "bdev_io_pool_size": 65535,
00:06:33.221 "bdev_io_cache_size": 256,
00:06:33.221 "bdev_auto_examine": true,
00:06:33.221 "iobuf_small_cache_size": 128,
00:06:33.221 "iobuf_large_cache_size": 16
00:06:33.221 }
00:06:33.221 },
00:06:33.221 {
00:06:33.221 "method": "bdev_raid_set_options",
00:06:33.221 "params": {
00:06:33.221 "process_window_size_kb": 1024,
00:06:33.221 "process_max_bandwidth_mb_sec": 0
00:06:33.221 }
00:06:33.221 },
00:06:33.221 {
00:06:33.221 "method": "bdev_iscsi_set_options",
00:06:33.221 "params": {
00:06:33.221 "timeout_sec": 30
00:06:33.221 }
00:06:33.221 },
00:06:33.221 {
00:06:33.221 "method": "bdev_nvme_set_options",
00:06:33.221 "params": {
00:06:33.221 "action_on_timeout": "none",
00:06:33.221 "timeout_us": 0,
00:06:33.221 "timeout_admin_us": 0,
00:06:33.221 "keep_alive_timeout_ms": 10000,
00:06:33.221 "arbitration_burst": 0,
00:06:33.221 "low_priority_weight": 0,
00:06:33.221 "medium_priority_weight": 0,
00:06:33.221 "high_priority_weight": 0,
00:06:33.221 "nvme_adminq_poll_period_us": 10000,
00:06:33.221 "nvme_ioq_poll_period_us": 0,
00:06:33.221 "io_queue_requests": 0,
00:06:33.221 "delay_cmd_submit": true,
00:06:33.221 "transport_retry_count": 4,
00:06:33.221 "bdev_retry_count": 3,
00:06:33.221 "transport_ack_timeout": 0,
00:06:33.221 "ctrlr_loss_timeout_sec": 0,
00:06:33.221 "reconnect_delay_sec": 0,
00:06:33.221 "fast_io_fail_timeout_sec": 0,
00:06:33.221 "disable_auto_failback": false,
00:06:33.221 "generate_uuids": false,
00:06:33.221 "transport_tos": 0,
00:06:33.221 "nvme_error_stat": false,
00:06:33.221 "rdma_srq_size": 0,
00:06:33.221 "io_path_stat": false,
00:06:33.221 "allow_accel_sequence": false,
00:06:33.221 "rdma_max_cq_size": 0,
00:06:33.221 "rdma_cm_event_timeout_ms": 0,
00:06:33.221 "dhchap_digests": [
00:06:33.221 "sha256",
00:06:33.221 "sha384",
00:06:33.221 "sha512"
00:06:33.221 ],
00:06:33.221 "dhchap_dhgroups": [
00:06:33.221 "null",
00:06:33.221 "ffdhe2048",
00:06:33.221 "ffdhe3072",
00:06:33.221 "ffdhe4096",
00:06:33.221 "ffdhe6144",
00:06:33.221 "ffdhe8192"
00:06:33.221 ]
00:06:33.221 }
00:06:33.221 },
00:06:33.221 {
00:06:33.221 "method": "bdev_nvme_set_hotplug",
00:06:33.221 "params": {
00:06:33.221 "period_us": 100000,
00:06:33.221 "enable": false
00:06:33.221 }
00:06:33.221 },
00:06:33.221 {
00:06:33.221 "method": "bdev_wait_for_examine"
00:06:33.221 }
00:06:33.221 ]
00:06:33.221 },
00:06:33.221 {
00:06:33.221 "subsystem": "scsi",
00:06:33.221 "config": null
00:06:33.221 },
00:06:33.221 {
00:06:33.221 "subsystem": "scheduler",
00:06:33.221 "config": [
00:06:33.221 {
00:06:33.221 "method": "framework_set_scheduler",
00:06:33.221 "params": {
00:06:33.221 "name": "static"
00:06:33.221 }
00:06:33.221 }
00:06:33.221 ]
00:06:33.221 },
00:06:33.221 {
00:06:33.221 "subsystem": "vhost_scsi",
00:06:33.221 "config": []
00:06:33.221 },
00:06:33.221 {
00:06:33.221 "subsystem": "vhost_blk",
00:06:33.221 "config": []
00:06:33.221 },
00:06:33.221 {
00:06:33.221 "subsystem": "ublk",
00:06:33.221 "config": []
00:06:33.221 },
00:06:33.221 {
00:06:33.221 "subsystem": "nbd",
00:06:33.221 "config": []
00:06:33.221 },
00:06:33.221 {
00:06:33.221 "subsystem": "nvmf",
00:06:33.221 "config": [
00:06:33.221 {
00:06:33.221 "method": "nvmf_set_config",
00:06:33.221 "params": {
00:06:33.221 "discovery_filter": "match_any",
00:06:33.221 "admin_cmd_passthru": {
00:06:33.221 "identify_ctrlr": false
00:06:33.221 },
00:06:33.221 "dhchap_digests": [
00:06:33.221 "sha256",
00:06:33.221 "sha384",
00:06:33.221 "sha512"
00:06:33.221 ],
00:06:33.221 "dhchap_dhgroups": [
00:06:33.221 "null",
00:06:33.221 "ffdhe2048",
00:06:33.221 "ffdhe3072",
00:06:33.221 "ffdhe4096",
00:06:33.221 "ffdhe6144",
00:06:33.221 "ffdhe8192"
00:06:33.221 ]
00:06:33.221 }
00:06:33.221 },
00:06:33.221 {
00:06:33.221 "method": "nvmf_set_max_subsystems",
00:06:33.221 "params": {
00:06:33.221 "max_subsystems": 1024
00:06:33.221 }
00:06:33.221 },
00:06:33.221 {
00:06:33.221 "method": "nvmf_set_crdt",
00:06:33.221 "params": {
00:06:33.221 "crdt1": 0,
00:06:33.221 "crdt2": 0,
00:06:33.221 "crdt3": 0
00:06:33.221 }
00:06:33.221 },
00:06:33.221 {
00:06:33.221 "method": "nvmf_create_transport",
00:06:33.221 "params": {
00:06:33.221 "trtype": "TCP",
00:06:33.221 "max_queue_depth": 128,
00:06:33.221 "max_io_qpairs_per_ctrlr": 127,
00:06:33.221 "in_capsule_data_size": 4096,
00:06:33.221 "max_io_size": 131072,
00:06:33.221 "io_unit_size": 131072,
00:06:33.221 "max_aq_depth": 128,
00:06:33.221 "num_shared_buffers": 511,
00:06:33.221 "buf_cache_size": 4294967295,
00:06:33.221 "dif_insert_or_strip": false,
00:06:33.221 "zcopy": false,
00:06:33.221 "c2h_success": true,
00:06:33.221 "sock_priority": 0,
00:06:33.221 "abort_timeout_sec": 1,
00:06:33.221 "ack_timeout": 0,
00:06:33.221 "data_wr_pool_size": 0
00:06:33.221 }
00:06:33.221 }
00:06:33.221 ]
00:06:33.221 },
00:06:33.221 {
00:06:33.221 "subsystem": "iscsi",
00:06:33.221 "config": [
00:06:33.221 {
00:06:33.221 "method": "iscsi_set_options",
00:06:33.221 "params": {
00:06:33.221 "node_base": "iqn.2016-06.io.spdk",
00:06:33.221 "max_sessions": 128,
00:06:33.221 "max_connections_per_session": 2,
00:06:33.221 "max_queue_depth": 64,
00:06:33.221 "default_time2wait": 2,
00:06:33.221 "default_time2retain": 20,
00:06:33.221 "first_burst_length": 8192,
00:06:33.221 "immediate_data": true,
00:06:33.221 "allow_duplicated_isid": false,
00:06:33.221 "error_recovery_level": 0,
00:06:33.221 "nop_timeout": 60,
00:06:33.221 "nop_in_interval": 30,
00:06:33.221 "disable_chap": false,
00:06:33.221 "require_chap": false,
00:06:33.221 "mutual_chap": false,
00:06:33.221 "chap_group": 0,
00:06:33.221 "max_large_datain_per_connection": 64,
00:06:33.221 "max_r2t_per_connection": 4,
00:06:33.221 "pdu_pool_size": 36864,
00:06:33.221 "immediate_data_pool_size": 16384,
00:06:33.221 "data_out_pool_size": 2048
00:06:33.221 }
00:06:33.221 }
00:06:33.221 ]
00:06:33.221 }
00:06:33.221 ]
00:06:33.221 }
00:06:33.221 18:08:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:06:33.221 18:08:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58277
00:06:33.221 18:08:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58277 ']'
00:06:33.221 18:08:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58277
00:06:33.221 18:08:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:06:33.221 18:08:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:33.221 18:08:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58277
00:06:33.221 killing process with pid 58277
00:06:33.221 18:08:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:33.221 18:08:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:33.221 18:08:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58277'
00:06:33.221 18:08:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58277
00:06:33.221 18:08:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58277
00:06:35.751 18:08:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58333
00:06:35.751 18:08:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:06:35.751 18:08:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:06:41.055 18:08:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58333
00:06:41.055 18:08:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58333 ']'
00:06:41.055 18:08:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58333
00:06:41.055 18:08:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:06:41.055 18:08:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:41.055 18:08:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58333
00:06:41.055 killing process with pid 58333
00:06:41.055 18:08:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:41.055 18:08:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:41.055 18:08:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58333'
00:06:41.055 18:08:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58333
00:06:41.055 18:08:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58333
00:06:42.955 18:08:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:06:42.955 18:08:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:06:42.955
00:06:42.955 real 0m11.226s
00:06:42.955 user 0m10.578s
00:06:42.955 sys 0m1.033s
00:06:42.955 ************************************
00:06:42.955 END TEST skip_rpc_with_json
00:06:42.955 ************************************
00:06:42.955 18:08:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:42.955 18:08:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:06:42.955 18:08:17 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:06:42.955 18:08:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:42.955 18:08:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:42.955 18:08:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:42.955 ************************************
00:06:42.955 START TEST skip_rpc_with_delay
00:06:42.955 ************************************
00:06:42.955 18:08:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:06:42.955 18:08:17 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:06:42.955 18:08:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:06:42.955 18:08:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:06:42.955 18:08:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:42.955 18:08:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:42.955 18:08:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:42.955 18:08:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:42.955 18:08:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:42.955 18:08:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:42.955 18:08:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:42.955 18:08:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:06:42.955 18:08:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:06:42.956 [2024-11-26 18:08:17.299539] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:06:42.956 ************************************
00:06:42.956 END TEST skip_rpc_with_delay
00:06:42.956 ************************************
00:06:42.956 18:08:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:06:42.956 18:08:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:42.956 18:08:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:42.956 18:08:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:42.956
00:06:42.956 real 0m0.209s
00:06:42.956 user 0m0.116s
00:06:42.956 sys 0m0.090s
00:06:42.956 18:08:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:42.956 18:08:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:06:42.956 18:08:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:06:43.213 18:08:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:06:43.213 18:08:17 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:06:43.213 18:08:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:43.213 18:08:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:43.213 18:08:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:43.213 ************************************
00:06:43.213 START TEST exit_on_failed_rpc_init
00:06:43.213 ************************************
00:06:43.213 18:08:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:06:43.213 18:08:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58465
00:06:43.213 18:08:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:43.213 18:08:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58465
00:06:43.213 18:08:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58465 ']'
00:06:43.214 18:08:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:43.214 18:08:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:43.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:43.214 18:08:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:43.214 18:08:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:43.214 18:08:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:06:43.214 [2024-11-26 18:08:17.555606] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization...
00:06:43.214 [2024-11-26 18:08:17.555804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58465 ]
00:06:43.471 [2024-11-26 18:08:17.741604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:43.471 [2024-11-26 18:08:17.875166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:44.407 18:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:44.407 18:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:06:44.407 18:08:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:44.407 18:08:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:06:44.407 18:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:06:44.407 18:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:06:44.407 18:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:44.407 18:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:44.407 18:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:44.407 18:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:44.407 18:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:44.407 18:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:44.407 18:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:06:44.407 18:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:06:44.407 18:08:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:06:44.665 [2024-11-26 18:08:18.906370] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization...
00:06:44.665 [2024-11-26 18:08:18.906630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58490 ]
00:06:44.935 [2024-11-26 18:08:19.088889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:44.935 [2024-11-26 18:08:19.221690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:44.935 [2024-11-26 18:08:19.221830] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:06:44.935 [2024-11-26 18:08:19.221853] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:44.935 [2024-11-26 18:08:19.221876] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.205 18:08:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:45.205 18:08:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:45.205 18:08:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:45.205 18:08:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:45.205 18:08:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:45.205 18:08:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:45.205 18:08:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:45.205 18:08:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58465 00:06:45.205 18:08:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58465 ']' 00:06:45.205 18:08:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58465 00:06:45.205 18:08:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:45.205 18:08:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.205 18:08:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58465 00:06:45.205 18:08:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.205 killing process with pid 58465 00:06:45.205 18:08:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.205 18:08:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58465' 00:06:45.205 18:08:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58465 00:06:45.205 18:08:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58465 00:06:47.749 00:06:47.749 real 0m4.397s 00:06:47.749 user 0m4.793s 00:06:47.749 sys 0m0.707s 00:06:47.749 ************************************ 00:06:47.749 END TEST exit_on_failed_rpc_init 00:06:47.749 ************************************ 00:06:47.749 18:08:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.749 18:08:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:47.749 18:08:21 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:47.749 00:06:47.749 real 0m23.563s 00:06:47.749 user 0m22.390s 00:06:47.749 sys 0m2.536s 00:06:47.749 18:08:21 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.749 ************************************ 00:06:47.749 END TEST skip_rpc 00:06:47.749 ************************************ 00:06:47.749 18:08:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.749 18:08:21 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:47.749 18:08:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.749 18:08:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.749 18:08:21 -- common/autotest_common.sh@10 -- # set +x 00:06:47.749 
************************************ 00:06:47.749 START TEST rpc_client 00:06:47.749 ************************************ 00:06:47.749 18:08:21 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:47.749 * Looking for test storage... 00:06:47.749 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:47.749 18:08:22 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.749 18:08:22 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:47.749 18:08:22 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:47.749 18:08:22 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.749 18:08:22 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:47.749 18:08:22 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.749 18:08:22 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:47.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.749 --rc genhtml_branch_coverage=1 00:06:47.749 --rc genhtml_function_coverage=1 00:06:47.749 --rc genhtml_legend=1 00:06:47.749 --rc geninfo_all_blocks=1 00:06:47.749 --rc geninfo_unexecuted_blocks=1 00:06:47.749 00:06:47.749 ' 00:06:47.749 18:08:22 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:47.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.749 --rc genhtml_branch_coverage=1 00:06:47.749 --rc genhtml_function_coverage=1 00:06:47.749 --rc genhtml_legend=1 00:06:47.749 --rc geninfo_all_blocks=1 00:06:47.749 --rc geninfo_unexecuted_blocks=1 00:06:47.749 00:06:47.749 ' 00:06:47.749 18:08:22 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:47.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.749 --rc genhtml_branch_coverage=1 00:06:47.749 --rc genhtml_function_coverage=1 00:06:47.749 --rc genhtml_legend=1 00:06:47.749 --rc geninfo_all_blocks=1 00:06:47.749 --rc geninfo_unexecuted_blocks=1 00:06:47.749 00:06:47.749 ' 00:06:47.749 18:08:22 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:47.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.749 --rc genhtml_branch_coverage=1 00:06:47.749 --rc genhtml_function_coverage=1 00:06:47.749 --rc genhtml_legend=1 00:06:47.749 --rc geninfo_all_blocks=1 00:06:47.749 --rc geninfo_unexecuted_blocks=1 00:06:47.749 00:06:47.749 ' 00:06:47.750 18:08:22 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:47.750 OK 00:06:47.750 18:08:22 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:47.750 00:06:47.750 real 0m0.256s 00:06:47.750 user 0m0.142s 00:06:47.750 sys 0m0.125s 00:06:47.750 18:08:22 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.750 ************************************ 00:06:47.750 END TEST rpc_client 00:06:47.750 ************************************ 00:06:47.750 18:08:22 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:48.007 18:08:22 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:48.007 18:08:22 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.007 18:08:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.007 18:08:22 -- common/autotest_common.sh@10 -- # set +x 00:06:48.007 ************************************ 00:06:48.007 START TEST json_config 00:06:48.007 ************************************ 00:06:48.007 18:08:22 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:48.007 18:08:22 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:48.007 18:08:22 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:48.007 18:08:22 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:48.007 18:08:22 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:48.007 18:08:22 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.007 18:08:22 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.007 18:08:22 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.007 18:08:22 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.007 18:08:22 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.007 18:08:22 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.007 18:08:22 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.007 18:08:22 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.007 18:08:22 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.007 18:08:22 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.007 18:08:22 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.007 18:08:22 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:48.007 18:08:22 json_config -- scripts/common.sh@345 -- # : 1 00:06:48.007 18:08:22 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.007 18:08:22 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:48.007 18:08:22 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:48.007 18:08:22 json_config -- scripts/common.sh@353 -- # local d=1 00:06:48.007 18:08:22 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.007 18:08:22 json_config -- scripts/common.sh@355 -- # echo 1 00:06:48.007 18:08:22 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.007 18:08:22 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:48.007 18:08:22 json_config -- scripts/common.sh@353 -- # local d=2 00:06:48.007 18:08:22 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.007 18:08:22 json_config -- scripts/common.sh@355 -- # echo 2 00:06:48.007 18:08:22 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.007 18:08:22 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.007 18:08:22 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.007 18:08:22 json_config -- scripts/common.sh@368 -- # return 0 00:06:48.007 18:08:22 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.007 18:08:22 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:48.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.007 --rc genhtml_branch_coverage=1 00:06:48.007 --rc genhtml_function_coverage=1 00:06:48.007 --rc genhtml_legend=1 00:06:48.007 --rc geninfo_all_blocks=1 00:06:48.007 --rc geninfo_unexecuted_blocks=1 00:06:48.007 00:06:48.007 ' 00:06:48.007 18:08:22 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:48.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.007 --rc genhtml_branch_coverage=1 00:06:48.007 --rc genhtml_function_coverage=1 00:06:48.007 --rc genhtml_legend=1 00:06:48.007 --rc geninfo_all_blocks=1 00:06:48.007 --rc geninfo_unexecuted_blocks=1 00:06:48.007 00:06:48.007 ' 00:06:48.007 18:08:22 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:48.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.007 --rc genhtml_branch_coverage=1 00:06:48.007 --rc genhtml_function_coverage=1 00:06:48.007 --rc genhtml_legend=1 00:06:48.007 --rc geninfo_all_blocks=1 00:06:48.007 --rc geninfo_unexecuted_blocks=1 00:06:48.007 00:06:48.007 ' 00:06:48.007 18:08:22 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:48.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.007 --rc genhtml_branch_coverage=1 00:06:48.007 --rc genhtml_function_coverage=1 00:06:48.007 --rc genhtml_legend=1 00:06:48.007 --rc geninfo_all_blocks=1 00:06:48.007 --rc geninfo_unexecuted_blocks=1 00:06:48.007 00:06:48.007 ' 00:06:48.007 18:08:22 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:48.007 18:08:22 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:48.007 18:08:22 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.007 18:08:22 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.007 18:08:22 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.007 18:08:22 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.007 18:08:22 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.007 18:08:22 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.007 18:08:22 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.007 18:08:22 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.007 18:08:22 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.007 18:08:22 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.007 18:08:22 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:337a1433-e489-415d-a6d5-4412432ba66c 00:06:48.007 18:08:22 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=337a1433-e489-415d-a6d5-4412432ba66c 00:06:48.007 18:08:22 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.007 18:08:22 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.007 18:08:22 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:48.007 18:08:22 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.007 18:08:22 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:48.007 18:08:22 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:48.007 18:08:22 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.007 18:08:22 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.007 18:08:22 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.008 18:08:22 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.008 18:08:22 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.008 18:08:22 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.008 18:08:22 json_config -- paths/export.sh@5 -- # export PATH 00:06:48.008 18:08:22 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.008 18:08:22 json_config -- nvmf/common.sh@51 -- # : 0 00:06:48.008 18:08:22 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:48.008 18:08:22 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:48.008 18:08:22 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.008 18:08:22 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.008 18:08:22 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.008 18:08:22 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:48.008 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:48.008 18:08:22 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:48.008 18:08:22 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:48.008 18:08:22 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:48.008 18:08:22 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:48.008 18:08:22 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:48.008 18:08:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:48.008 18:08:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:48.008 18:08:22 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:48.008 WARNING: No tests are enabled so not running JSON configuration tests 00:06:48.008 18:08:22 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:48.008 18:08:22 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:48.008 00:06:48.008 real 0m0.189s 00:06:48.008 user 0m0.124s 00:06:48.008 sys 0m0.072s 00:06:48.008 18:08:22 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.008 ************************************ 00:06:48.008 END TEST json_config 00:06:48.008 ************************************ 00:06:48.008 18:08:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:48.008 18:08:22 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:48.008 18:08:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.008 18:08:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.008 18:08:22 -- common/autotest_common.sh@10 -- # set +x 00:06:48.267 ************************************ 00:06:48.267 START TEST json_config_extra_key 00:06:48.267 ************************************ 00:06:48.267 18:08:22 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:48.267 18:08:22 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:48.267 18:08:22 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:48.267 18:08:22 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:48.267 18:08:22 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.267 18:08:22 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:48.267 18:08:22 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.267 18:08:22 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:48.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.267 --rc genhtml_branch_coverage=1 00:06:48.267 --rc genhtml_function_coverage=1 00:06:48.267 --rc genhtml_legend=1 00:06:48.267 --rc geninfo_all_blocks=1 00:06:48.267 --rc geninfo_unexecuted_blocks=1 00:06:48.267 00:06:48.267 ' 00:06:48.267 18:08:22 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:48.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.267 --rc genhtml_branch_coverage=1 00:06:48.267 --rc genhtml_function_coverage=1 00:06:48.267 --rc genhtml_legend=1 00:06:48.267 --rc geninfo_all_blocks=1 00:06:48.267 --rc geninfo_unexecuted_blocks=1 00:06:48.267 00:06:48.267 ' 00:06:48.267 18:08:22 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:48.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.267 --rc genhtml_branch_coverage=1 00:06:48.267 --rc genhtml_function_coverage=1 00:06:48.267 --rc genhtml_legend=1 00:06:48.267 --rc geninfo_all_blocks=1 00:06:48.267 --rc geninfo_unexecuted_blocks=1 00:06:48.267 00:06:48.267 ' 00:06:48.267 18:08:22 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:48.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.267 --rc genhtml_branch_coverage=1 00:06:48.267 --rc 
genhtml_function_coverage=1 00:06:48.267 --rc genhtml_legend=1 00:06:48.267 --rc geninfo_all_blocks=1 00:06:48.267 --rc geninfo_unexecuted_blocks=1 00:06:48.267 00:06:48.267 ' 00:06:48.267 18:08:22 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:337a1433-e489-415d-a6d5-4412432ba66c 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=337a1433-e489-415d-a6d5-4412432ba66c 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.267 18:08:22 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.267 18:08:22 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.267 18:08:22 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.267 18:08:22 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.267 18:08:22 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:48.267 18:08:22 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:48.267 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:48.267 18:08:22 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:48.267 18:08:22 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:48.267 18:08:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:48.268 18:08:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:48.268 18:08:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:48.268 18:08:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:48.268 18:08:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:48.268 18:08:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:48.268 18:08:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:48.268 18:08:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:48.268 18:08:22 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:48.268 18:08:22 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:48.268 INFO: launching applications... 
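The "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" message seen above (and earlier in the json_config run) is a shell quirk rather than a test failure: the traced test is '[' '' -eq 1 ']', and -eq requires an integer operand, but the variable being compared expands to an empty string. A minimal reproduction and the usual guard, as a sketch; the variable name here is hypothetical, since the log does not show which variable common.sh line 33 actually tests:

#!/usr/bin/env bash
# Reproduces "[: : integer expression expected": -eq needs an integer,
# but an unset or empty variable expands to the empty string.
SOME_FLAG=""
if [ "$SOME_FLAG" -eq 1 ]; then    # prints the warning; condition is false
    echo "flag set"
fi
# Guarded form: default the expansion to 0 so [ always sees an integer.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
fi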
00:06:48.268 18:08:22 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:48.268 18:08:22 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:48.268 18:08:22 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:48.268 18:08:22 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:48.268 18:08:22 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:48.268 18:08:22 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:48.268 18:08:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:48.268 18:08:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:48.268 18:08:22 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58700 00:06:48.268 18:08:22 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:48.268 Waiting for target to run... 00:06:48.268 18:08:22 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:48.268 18:08:22 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58700 /var/tmp/spdk_tgt.sock 00:06:48.268 18:08:22 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58700 ']' 00:06:48.268 18:08:22 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:48.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:48.268 18:08:22 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.268 18:08:22 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:48.268 18:08:22 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.268 18:08:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:48.526 [2024-11-26 18:08:22.780733] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:06:48.526 [2024-11-26 18:08:22.780950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58700 ] 00:06:49.093 [2024-11-26 18:08:23.277282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.093 [2024-11-26 18:08:23.415208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.029 00:06:50.029 INFO: shutting down applications... 00:06:50.029 18:08:24 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.029 18:08:24 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:50.029 18:08:24 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:50.029 18:08:24 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
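The launch sequence just traced follows the harness's start-then-wait pattern: spdk_tgt is started with a private RPC socket (-r /var/tmp/spdk_tgt.sock) and the extra_key.json config, and the test proceeds only once the socket accepts RPCs. A minimal sketch of that pattern using the paths shown in this log; the retry count and the use of spdk_get_version as the readiness probe are assumptions, not lifted from the harness:

#!/usr/bin/env bash
SPDK_DIR=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk_tgt.sock

# Start the target with the JSON config on a dedicated RPC socket.
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$RPC_SOCK" \
    --json "$SPDK_DIR/test/json_config/extra_key.json" &
tgt_pid=$!

# Poll until the socket answers an RPC before driving the test.
for _ in $(seq 1 100); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" spdk_get_version >/dev/null 2>&1; then
        echo "target is up (pid $tgt_pid)"
        break
    fi
    sleep 0.1
done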
00:06:50.029 18:08:24 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:50.029 18:08:24 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:50.029 18:08:24 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:50.029 18:08:24 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58700 ]] 00:06:50.029 18:08:24 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58700 00:06:50.029 18:08:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:50.029 18:08:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:50.029 18:08:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58700 00:06:50.029 18:08:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:50.287 18:08:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:50.287 18:08:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:50.287 18:08:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58700 00:06:50.287 18:08:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:50.925 18:08:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:50.925 18:08:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:50.925 18:08:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58700 00:06:50.925 18:08:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:51.492 18:08:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:51.492 18:08:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:51.492 18:08:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58700 00:06:51.492 18:08:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:51.750 18:08:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:51.750 18:08:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:51.750 18:08:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58700 00:06:51.750 18:08:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:52.317 18:08:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:52.317 18:08:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:52.317 18:08:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58700 00:06:52.317 18:08:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:52.883 18:08:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:52.883 18:08:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:52.883 18:08:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58700 00:06:52.883 18:08:27 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:52.883 18:08:27 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:52.883 18:08:27 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:52.883 SPDK target shutdown done 00:06:52.884 18:08:27 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:52.884 Success 00:06:52.884 18:08:27 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:52.884 00:06:52.884 real 0m4.733s 00:06:52.884 user 0m4.165s 00:06:52.884 sys 0m0.695s 00:06:52.884 
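The shutdown that just completed is a SIGINT-then-poll loop: json_config/common.sh sends SIGINT once, then probes the pid with kill -0 every half second, giving up after 30 tries, before printing 'SPDK target shutdown done'. The same loop as a standalone sketch, assuming tgt_pid holds the target's pid:

# Ask the target to exit cleanly, then wait for its pid to disappear.
kill -SIGINT "$tgt_pid"
for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "$tgt_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5    # same cadence as json_config/common.sh's wait loop
done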
18:08:27 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.884 18:08:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:52.884 ************************************ 00:06:52.884 END TEST json_config_extra_key 00:06:52.884 ************************************ 00:06:52.884 18:08:27 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:52.884 18:08:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.884 18:08:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.884 18:08:27 -- common/autotest_common.sh@10 -- # set +x 00:06:52.884 ************************************ 00:06:52.884 START TEST alias_rpc 00:06:52.884 ************************************ 00:06:52.884 18:08:27 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:52.884 * Looking for test storage... 00:06:52.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:52.884 18:08:27 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:52.884 18:08:27 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:52.884 18:08:27 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:53.142 18:08:27 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.142 18:08:27 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:53.142 18:08:27 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.142 18:08:27 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:53.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.142 --rc genhtml_branch_coverage=1 00:06:53.142 --rc genhtml_function_coverage=1 00:06:53.142 --rc genhtml_legend=1 00:06:53.142 --rc geninfo_all_blocks=1 00:06:53.142 --rc geninfo_unexecuted_blocks=1 00:06:53.142 00:06:53.142 ' 00:06:53.142 18:08:27 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:53.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.142 --rc genhtml_branch_coverage=1 00:06:53.142 --rc genhtml_function_coverage=1 00:06:53.142 --rc genhtml_legend=1 00:06:53.142 --rc geninfo_all_blocks=1 00:06:53.142 --rc geninfo_unexecuted_blocks=1 00:06:53.142 00:06:53.142 ' 00:06:53.142 18:08:27 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:53.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.142 --rc genhtml_branch_coverage=1 00:06:53.142 --rc genhtml_function_coverage=1 00:06:53.142 --rc genhtml_legend=1 00:06:53.142 --rc geninfo_all_blocks=1 00:06:53.142 --rc geninfo_unexecuted_blocks=1 00:06:53.142 00:06:53.142 ' 00:06:53.142 18:08:27 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:53.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.142 --rc genhtml_branch_coverage=1 00:06:53.142 --rc genhtml_function_coverage=1 00:06:53.142 --rc genhtml_legend=1 00:06:53.142 --rc geninfo_all_blocks=1 00:06:53.142 --rc geninfo_unexecuted_blocks=1 00:06:53.142 00:06:53.142 ' 00:06:53.142 18:08:27 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:53.142 18:08:27 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58806 00:06:53.142 18:08:27 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58806 00:06:53.142 18:08:27 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:53.142 18:08:27 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58806 ']' 00:06:53.142 18:08:27 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.142 18:08:27 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.142 18:08:27 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:53.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.142 18:08:27 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.142 18:08:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.142 [2024-11-26 18:08:27.557368] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:06:53.142 [2024-11-26 18:08:27.557591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58806 ] 00:06:53.401 [2024-11-26 18:08:27.739872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.661 [2024-11-26 18:08:27.869067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.596 18:08:28 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.596 18:08:28 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:54.596 18:08:28 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:54.872 18:08:29 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58806 00:06:54.872 18:08:29 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58806 ']' 00:06:54.872 18:08:29 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58806 00:06:54.872 18:08:29 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:54.872 18:08:29 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.872 18:08:29 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58806 00:06:54.872 18:08:29 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.872 18:08:29 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.872 killing process with pid 58806 00:06:54.872 18:08:29 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58806' 00:06:54.872 18:08:29 alias_rpc -- common/autotest_common.sh@973 -- # kill 58806 00:06:54.872 18:08:29 alias_rpc -- common/autotest_common.sh@978 -- # wait 58806 00:06:57.481 00:06:57.481 real 0m4.185s 00:06:57.481 user 0m4.293s 00:06:57.481 sys 0m0.669s 00:06:57.481 18:08:31 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.481 18:08:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.481 ************************************ 00:06:57.481 END TEST alias_rpc 00:06:57.481 ************************************ 00:06:57.481 18:08:31 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:57.481 18:08:31 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:57.481 18:08:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.481 18:08:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.481 18:08:31 -- common/autotest_common.sh@10 -- # set +x 00:06:57.481 ************************************ 00:06:57.481 START TEST spdkcli_tcp 00:06:57.481 ************************************ 00:06:57.481 18:08:31 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:57.481 * Looking for test storage... 
00:06:57.481 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:57.481 18:08:31 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:57.481 18:08:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:57.481 18:08:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:57.481 18:08:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.481 18:08:31 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:57.481 18:08:31 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.481 18:08:31 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:57.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.481 --rc genhtml_branch_coverage=1 00:06:57.481 --rc genhtml_function_coverage=1 00:06:57.481 --rc genhtml_legend=1 00:06:57.481 --rc geninfo_all_blocks=1 00:06:57.481 --rc geninfo_unexecuted_blocks=1 00:06:57.481 00:06:57.481 ' 00:06:57.481 18:08:31 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:57.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.481 --rc genhtml_branch_coverage=1 00:06:57.481 --rc genhtml_function_coverage=1 00:06:57.481 --rc genhtml_legend=1 00:06:57.481 --rc geninfo_all_blocks=1 00:06:57.481 --rc geninfo_unexecuted_blocks=1 00:06:57.481 
00:06:57.481 ' 00:06:57.481 18:08:31 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:57.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.481 --rc genhtml_branch_coverage=1 00:06:57.481 --rc genhtml_function_coverage=1 00:06:57.481 --rc genhtml_legend=1 00:06:57.481 --rc geninfo_all_blocks=1 00:06:57.481 --rc geninfo_unexecuted_blocks=1 00:06:57.481 00:06:57.481 ' 00:06:57.481 18:08:31 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:57.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.481 --rc genhtml_branch_coverage=1 00:06:57.481 --rc genhtml_function_coverage=1 00:06:57.481 --rc genhtml_legend=1 00:06:57.481 --rc geninfo_all_blocks=1 00:06:57.481 --rc geninfo_unexecuted_blocks=1 00:06:57.481 00:06:57.481 ' 00:06:57.481 18:08:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:57.481 18:08:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:57.482 18:08:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:57.482 18:08:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:57.482 18:08:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:57.482 18:08:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:57.482 18:08:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:57.482 18:08:31 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:57.482 18:08:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.482 18:08:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58913 00:06:57.482 18:08:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:57.482 18:08:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58913 00:06:57.482 18:08:31 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58913 ']' 00:06:57.482 18:08:31 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.482 18:08:31 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.482 18:08:31 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.482 18:08:31 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.482 18:08:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.482 [2024-11-26 18:08:31.787955] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
00:06:57.482 [2024-11-26 18:08:31.788119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58913 ] 00:06:57.739 [2024-11-26 18:08:31.971200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:57.739 [2024-11-26 18:08:32.105478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.739 [2024-11-26 18:08:32.105488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.673 18:08:32 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.673 18:08:32 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:58.673 18:08:33 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58940 00:06:58.673 18:08:33 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:58.673 18:08:33 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:58.931 [ 00:06:58.931 "bdev_malloc_delete", 00:06:58.931 "bdev_malloc_create", 00:06:58.931 "bdev_null_resize", 00:06:58.932 "bdev_null_delete", 00:06:58.932 "bdev_null_create", 00:06:58.932 "bdev_nvme_cuse_unregister", 00:06:58.932 "bdev_nvme_cuse_register", 00:06:58.932 "bdev_opal_new_user", 00:06:58.932 "bdev_opal_set_lock_state", 00:06:58.932 "bdev_opal_delete", 00:06:58.932 "bdev_opal_get_info", 00:06:58.932 "bdev_opal_create", 00:06:58.932 "bdev_nvme_opal_revert", 00:06:58.932 "bdev_nvme_opal_init", 00:06:58.932 "bdev_nvme_send_cmd", 00:06:58.932 "bdev_nvme_set_keys", 00:06:58.932 "bdev_nvme_get_path_iostat", 00:06:58.932 "bdev_nvme_get_mdns_discovery_info", 00:06:58.932 "bdev_nvme_stop_mdns_discovery", 00:06:58.932 "bdev_nvme_start_mdns_discovery", 00:06:58.932 "bdev_nvme_set_multipath_policy", 00:06:58.932 "bdev_nvme_set_preferred_path", 00:06:58.932 "bdev_nvme_get_io_paths", 00:06:58.932 "bdev_nvme_remove_error_injection", 00:06:58.932 "bdev_nvme_add_error_injection", 00:06:58.932 "bdev_nvme_get_discovery_info", 00:06:58.932 "bdev_nvme_stop_discovery", 00:06:58.932 "bdev_nvme_start_discovery", 00:06:58.932 "bdev_nvme_get_controller_health_info", 00:06:58.932 "bdev_nvme_disable_controller", 00:06:58.932 "bdev_nvme_enable_controller", 00:06:58.932 "bdev_nvme_reset_controller", 00:06:58.932 "bdev_nvme_get_transport_statistics", 00:06:58.932 "bdev_nvme_apply_firmware", 00:06:58.932 "bdev_nvme_detach_controller", 00:06:58.932 "bdev_nvme_get_controllers", 00:06:58.932 "bdev_nvme_attach_controller", 00:06:58.932 "bdev_nvme_set_hotplug", 00:06:58.932 "bdev_nvme_set_options", 00:06:58.932 "bdev_passthru_delete", 00:06:58.932 "bdev_passthru_create", 00:06:58.932 "bdev_lvol_set_parent_bdev", 00:06:58.932 "bdev_lvol_set_parent", 00:06:58.932 "bdev_lvol_check_shallow_copy", 00:06:58.932 "bdev_lvol_start_shallow_copy", 00:06:58.932 "bdev_lvol_grow_lvstore", 00:06:58.932 "bdev_lvol_get_lvols", 00:06:58.932 "bdev_lvol_get_lvstores", 00:06:58.932 "bdev_lvol_delete", 00:06:58.932 "bdev_lvol_set_read_only", 00:06:58.932 "bdev_lvol_resize", 00:06:58.932 "bdev_lvol_decouple_parent", 00:06:58.932 "bdev_lvol_inflate", 00:06:58.932 "bdev_lvol_rename", 00:06:58.932 "bdev_lvol_clone_bdev", 00:06:58.932 "bdev_lvol_clone", 00:06:58.932 "bdev_lvol_snapshot", 00:06:58.932 "bdev_lvol_create", 00:06:58.932 "bdev_lvol_delete_lvstore", 00:06:58.932 "bdev_lvol_rename_lvstore", 00:06:58.932 
"bdev_lvol_create_lvstore", 00:06:58.932 "bdev_raid_set_options", 00:06:58.932 "bdev_raid_remove_base_bdev", 00:06:58.932 "bdev_raid_add_base_bdev", 00:06:58.932 "bdev_raid_delete", 00:06:58.932 "bdev_raid_create", 00:06:58.932 "bdev_raid_get_bdevs", 00:06:58.932 "bdev_error_inject_error", 00:06:58.932 "bdev_error_delete", 00:06:58.932 "bdev_error_create", 00:06:58.932 "bdev_split_delete", 00:06:58.932 "bdev_split_create", 00:06:58.932 "bdev_delay_delete", 00:06:58.932 "bdev_delay_create", 00:06:58.932 "bdev_delay_update_latency", 00:06:58.932 "bdev_zone_block_delete", 00:06:58.932 "bdev_zone_block_create", 00:06:58.932 "blobfs_create", 00:06:58.932 "blobfs_detect", 00:06:58.932 "blobfs_set_cache_size", 00:06:58.932 "bdev_xnvme_delete", 00:06:58.932 "bdev_xnvme_create", 00:06:58.932 "bdev_aio_delete", 00:06:58.932 "bdev_aio_rescan", 00:06:58.932 "bdev_aio_create", 00:06:58.932 "bdev_ftl_set_property", 00:06:58.932 "bdev_ftl_get_properties", 00:06:58.932 "bdev_ftl_get_stats", 00:06:58.932 "bdev_ftl_unmap", 00:06:58.932 "bdev_ftl_unload", 00:06:58.932 "bdev_ftl_delete", 00:06:58.932 "bdev_ftl_load", 00:06:58.932 "bdev_ftl_create", 00:06:58.932 "bdev_virtio_attach_controller", 00:06:58.932 "bdev_virtio_scsi_get_devices", 00:06:58.932 "bdev_virtio_detach_controller", 00:06:58.932 "bdev_virtio_blk_set_hotplug", 00:06:58.932 "bdev_iscsi_delete", 00:06:58.932 "bdev_iscsi_create", 00:06:58.932 "bdev_iscsi_set_options", 00:06:58.932 "accel_error_inject_error", 00:06:58.932 "ioat_scan_accel_module", 00:06:58.932 "dsa_scan_accel_module", 00:06:58.932 "iaa_scan_accel_module", 00:06:58.932 "keyring_file_remove_key", 00:06:58.932 "keyring_file_add_key", 00:06:58.932 "keyring_linux_set_options", 00:06:58.932 "fsdev_aio_delete", 00:06:58.932 "fsdev_aio_create", 00:06:58.932 "iscsi_get_histogram", 00:06:58.932 "iscsi_enable_histogram", 00:06:58.932 "iscsi_set_options", 00:06:58.932 "iscsi_get_auth_groups", 00:06:58.932 "iscsi_auth_group_remove_secret", 00:06:58.932 "iscsi_auth_group_add_secret", 00:06:58.932 "iscsi_delete_auth_group", 00:06:58.932 "iscsi_create_auth_group", 00:06:58.932 "iscsi_set_discovery_auth", 00:06:58.932 "iscsi_get_options", 00:06:58.932 "iscsi_target_node_request_logout", 00:06:58.932 "iscsi_target_node_set_redirect", 00:06:58.932 "iscsi_target_node_set_auth", 00:06:58.932 "iscsi_target_node_add_lun", 00:06:58.932 "iscsi_get_stats", 00:06:58.932 "iscsi_get_connections", 00:06:58.932 "iscsi_portal_group_set_auth", 00:06:58.932 "iscsi_start_portal_group", 00:06:58.932 "iscsi_delete_portal_group", 00:06:58.932 "iscsi_create_portal_group", 00:06:58.932 "iscsi_get_portal_groups", 00:06:58.932 "iscsi_delete_target_node", 00:06:58.932 "iscsi_target_node_remove_pg_ig_maps", 00:06:58.932 "iscsi_target_node_add_pg_ig_maps", 00:06:58.932 "iscsi_create_target_node", 00:06:58.932 "iscsi_get_target_nodes", 00:06:58.932 "iscsi_delete_initiator_group", 00:06:58.932 "iscsi_initiator_group_remove_initiators", 00:06:58.932 "iscsi_initiator_group_add_initiators", 00:06:58.932 "iscsi_create_initiator_group", 00:06:58.932 "iscsi_get_initiator_groups", 00:06:58.932 "nvmf_set_crdt", 00:06:58.932 "nvmf_set_config", 00:06:58.932 "nvmf_set_max_subsystems", 00:06:58.932 "nvmf_stop_mdns_prr", 00:06:58.932 "nvmf_publish_mdns_prr", 00:06:58.932 "nvmf_subsystem_get_listeners", 00:06:58.932 "nvmf_subsystem_get_qpairs", 00:06:58.932 "nvmf_subsystem_get_controllers", 00:06:58.932 "nvmf_get_stats", 00:06:58.932 "nvmf_get_transports", 00:06:58.932 "nvmf_create_transport", 00:06:58.932 "nvmf_get_targets", 00:06:58.932 
"nvmf_delete_target", 00:06:58.932 "nvmf_create_target", 00:06:58.932 "nvmf_subsystem_allow_any_host", 00:06:58.932 "nvmf_subsystem_set_keys", 00:06:58.932 "nvmf_subsystem_remove_host", 00:06:58.932 "nvmf_subsystem_add_host", 00:06:58.932 "nvmf_ns_remove_host", 00:06:58.932 "nvmf_ns_add_host", 00:06:58.932 "nvmf_subsystem_remove_ns", 00:06:58.932 "nvmf_subsystem_set_ns_ana_group", 00:06:58.932 "nvmf_subsystem_add_ns", 00:06:58.932 "nvmf_subsystem_listener_set_ana_state", 00:06:58.932 "nvmf_discovery_get_referrals", 00:06:58.932 "nvmf_discovery_remove_referral", 00:06:58.932 "nvmf_discovery_add_referral", 00:06:58.932 "nvmf_subsystem_remove_listener", 00:06:58.932 "nvmf_subsystem_add_listener", 00:06:58.932 "nvmf_delete_subsystem", 00:06:58.932 "nvmf_create_subsystem", 00:06:58.932 "nvmf_get_subsystems", 00:06:58.932 "env_dpdk_get_mem_stats", 00:06:58.932 "nbd_get_disks", 00:06:58.932 "nbd_stop_disk", 00:06:58.932 "nbd_start_disk", 00:06:58.932 "ublk_recover_disk", 00:06:58.932 "ublk_get_disks", 00:06:58.932 "ublk_stop_disk", 00:06:58.932 "ublk_start_disk", 00:06:58.932 "ublk_destroy_target", 00:06:58.932 "ublk_create_target", 00:06:58.932 "virtio_blk_create_transport", 00:06:58.932 "virtio_blk_get_transports", 00:06:58.932 "vhost_controller_set_coalescing", 00:06:58.932 "vhost_get_controllers", 00:06:58.932 "vhost_delete_controller", 00:06:58.932 "vhost_create_blk_controller", 00:06:58.932 "vhost_scsi_controller_remove_target", 00:06:58.932 "vhost_scsi_controller_add_target", 00:06:58.932 "vhost_start_scsi_controller", 00:06:58.932 "vhost_create_scsi_controller", 00:06:58.932 "thread_set_cpumask", 00:06:58.932 "scheduler_set_options", 00:06:58.932 "framework_get_governor", 00:06:58.932 "framework_get_scheduler", 00:06:58.932 "framework_set_scheduler", 00:06:58.932 "framework_get_reactors", 00:06:58.932 "thread_get_io_channels", 00:06:58.932 "thread_get_pollers", 00:06:58.932 "thread_get_stats", 00:06:58.932 "framework_monitor_context_switch", 00:06:58.932 "spdk_kill_instance", 00:06:58.932 "log_enable_timestamps", 00:06:58.932 "log_get_flags", 00:06:58.932 "log_clear_flag", 00:06:58.932 "log_set_flag", 00:06:58.932 "log_get_level", 00:06:58.932 "log_set_level", 00:06:58.933 "log_get_print_level", 00:06:58.933 "log_set_print_level", 00:06:58.933 "framework_enable_cpumask_locks", 00:06:58.933 "framework_disable_cpumask_locks", 00:06:58.933 "framework_wait_init", 00:06:58.933 "framework_start_init", 00:06:58.933 "scsi_get_devices", 00:06:58.933 "bdev_get_histogram", 00:06:58.933 "bdev_enable_histogram", 00:06:58.933 "bdev_set_qos_limit", 00:06:58.933 "bdev_set_qd_sampling_period", 00:06:58.933 "bdev_get_bdevs", 00:06:58.933 "bdev_reset_iostat", 00:06:58.933 "bdev_get_iostat", 00:06:58.933 "bdev_examine", 00:06:58.933 "bdev_wait_for_examine", 00:06:58.933 "bdev_set_options", 00:06:58.933 "accel_get_stats", 00:06:58.933 "accel_set_options", 00:06:58.933 "accel_set_driver", 00:06:58.933 "accel_crypto_key_destroy", 00:06:58.933 "accel_crypto_keys_get", 00:06:58.933 "accel_crypto_key_create", 00:06:58.933 "accel_assign_opc", 00:06:58.933 "accel_get_module_info", 00:06:58.933 "accel_get_opc_assignments", 00:06:58.933 "vmd_rescan", 00:06:58.933 "vmd_remove_device", 00:06:58.933 "vmd_enable", 00:06:58.933 "sock_get_default_impl", 00:06:58.933 "sock_set_default_impl", 00:06:58.933 "sock_impl_set_options", 00:06:58.933 "sock_impl_get_options", 00:06:58.933 "iobuf_get_stats", 00:06:58.933 "iobuf_set_options", 00:06:58.933 "keyring_get_keys", 00:06:58.933 "framework_get_pci_devices", 00:06:58.933 
"framework_get_config", 00:06:58.933 "framework_get_subsystems", 00:06:58.933 "fsdev_set_opts", 00:06:58.933 "fsdev_get_opts", 00:06:58.933 "trace_get_info", 00:06:58.933 "trace_get_tpoint_group_mask", 00:06:58.933 "trace_disable_tpoint_group", 00:06:58.933 "trace_enable_tpoint_group", 00:06:58.933 "trace_clear_tpoint_mask", 00:06:58.933 "trace_set_tpoint_mask", 00:06:58.933 "notify_get_notifications", 00:06:58.933 "notify_get_types", 00:06:58.933 "spdk_get_version", 00:06:58.933 "rpc_get_methods" 00:06:58.933 ] 00:06:58.933 18:08:33 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:58.933 18:08:33 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:58.933 18:08:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:58.933 18:08:33 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:58.933 18:08:33 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58913 00:06:58.933 18:08:33 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58913 ']' 00:06:58.933 18:08:33 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58913 00:06:58.933 18:08:33 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:58.933 18:08:33 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.933 18:08:33 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58913 00:06:58.933 18:08:33 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.933 18:08:33 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.933 18:08:33 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58913' 00:06:58.933 killing process with pid 58913 00:06:58.933 18:08:33 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58913 00:06:58.933 18:08:33 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58913 00:07:01.466 00:07:01.466 real 0m4.206s 00:07:01.466 user 0m7.565s 00:07:01.466 sys 0m0.701s 00:07:01.466 18:08:35 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.466 18:08:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:01.466 ************************************ 00:07:01.466 END TEST spdkcli_tcp 00:07:01.466 ************************************ 00:07:01.466 18:08:35 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:01.466 18:08:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.466 18:08:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.466 18:08:35 -- common/autotest_common.sh@10 -- # set +x 00:07:01.466 ************************************ 00:07:01.466 START TEST dpdk_mem_utility 00:07:01.466 ************************************ 00:07:01.466 18:08:35 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:01.466 * Looking for test storage... 
00:07:01.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:01.466 18:08:35 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.466 18:08:35 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:01.466 18:08:35 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.466 18:08:35 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:01.466 18:08:35 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.466 18:08:35 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.466 18:08:35 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.466 18:08:35 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.466 18:08:35 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.466 18:08:35 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.466 18:08:35 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.466 18:08:35 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.466 18:08:35 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.466 18:08:35 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.466 18:08:35 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.466 18:08:35 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:01.466 18:08:35 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:01.466 18:08:35 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.466 18:08:35 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.466 18:08:35 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:01.466 18:08:35 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:01.466 18:08:35 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.466 18:08:35 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:01.466 18:08:35 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.466 18:08:35 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:01.466 18:08:35 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:01.466 18:08:35 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.724 18:08:35 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:01.724 18:08:35 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:01.724 18:08:35 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.724 18:08:35 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.724 18:08:35 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:01.725 18:08:35 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.725 18:08:35 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:01.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.725 --rc genhtml_branch_coverage=1 00:07:01.725 --rc genhtml_function_coverage=1 00:07:01.725 --rc genhtml_legend=1 00:07:01.725 --rc geninfo_all_blocks=1 00:07:01.725 --rc geninfo_unexecuted_blocks=1 00:07:01.725 00:07:01.725 ' 00:07:01.725 18:08:35 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:01.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.725 --rc genhtml_branch_coverage=1 00:07:01.725 --rc genhtml_function_coverage=1 00:07:01.725 --rc genhtml_legend=1 00:07:01.725 --rc geninfo_all_blocks=1 00:07:01.725 --rc geninfo_unexecuted_blocks=1 00:07:01.725 00:07:01.725 ' 00:07:01.725 18:08:35 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:01.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.725 --rc genhtml_branch_coverage=1 00:07:01.725 --rc genhtml_function_coverage=1 00:07:01.725 --rc genhtml_legend=1 00:07:01.725 --rc geninfo_all_blocks=1 00:07:01.725 --rc geninfo_unexecuted_blocks=1 00:07:01.725 00:07:01.725 ' 00:07:01.725 18:08:35 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:01.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.725 --rc genhtml_branch_coverage=1 00:07:01.725 --rc genhtml_function_coverage=1 00:07:01.725 --rc genhtml_legend=1 00:07:01.725 --rc geninfo_all_blocks=1 00:07:01.725 --rc geninfo_unexecuted_blocks=1 00:07:01.725 00:07:01.725 ' 00:07:01.725 18:08:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:01.725 18:08:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59041 00:07:01.725 18:08:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59041 00:07:01.725 18:08:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:01.725 18:08:35 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59041 ']' 00:07:01.725 18:08:35 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.725 18:08:35 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.725 18:08:35 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.725 18:08:35 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.725 18:08:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:01.725 [2024-11-26 18:08:36.063145] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
00:07:01.725 [2024-11-26 18:08:36.063336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59041 ] 00:07:01.983 [2024-11-26 18:08:36.314974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.241 [2024-11-26 18:08:36.466368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.177 18:08:37 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.177 18:08:37 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:03.177 18:08:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:03.177 18:08:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:03.177 18:08:37 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.177 18:08:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:03.177 { 00:07:03.177 "filename": "/tmp/spdk_mem_dump.txt" 00:07:03.177 } 00:07:03.177 18:08:37 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.177 18:08:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:03.177 DPDK memory size 824.000000 MiB in 1 heap(s) 00:07:03.177 1 heaps totaling size 824.000000 MiB 00:07:03.177 size: 824.000000 MiB heap id: 0 00:07:03.177 end heaps---------- 00:07:03.177 9 mempools totaling size 603.782043 MiB 00:07:03.177 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:03.177 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:03.177 size: 100.555481 MiB name: bdev_io_59041 00:07:03.177 size: 50.003479 MiB name: msgpool_59041 00:07:03.177 size: 36.509338 MiB name: fsdev_io_59041 00:07:03.177 size: 21.763794 MiB name: PDU_Pool 00:07:03.177 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:03.177 size: 4.133484 MiB name: evtpool_59041 00:07:03.177 size: 0.026123 MiB name: Session_Pool 00:07:03.177 end mempools------- 00:07:03.177 6 memzones totaling size 4.142822 MiB 00:07:03.177 size: 1.000366 MiB name: RG_ring_0_59041 00:07:03.177 size: 1.000366 MiB name: RG_ring_1_59041 00:07:03.177 size: 1.000366 MiB name: RG_ring_4_59041 00:07:03.177 size: 1.000366 MiB name: RG_ring_5_59041 00:07:03.177 size: 0.125366 MiB name: RG_ring_2_59041 00:07:03.177 size: 0.015991 MiB name: RG_ring_3_59041 00:07:03.177 end memzones------- 00:07:03.177 18:08:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:03.177 heap id: 0 total size: 824.000000 MiB number of busy elements: 310 number of free elements: 18 00:07:03.177 list of free elements. 
size: 16.782593 MiB 00:07:03.177 element at address: 0x200006400000 with size: 1.995972 MiB 00:07:03.177 element at address: 0x20000a600000 with size: 1.995972 MiB 00:07:03.177 element at address: 0x200003e00000 with size: 1.991028 MiB 00:07:03.177 element at address: 0x200019500040 with size: 0.999939 MiB 00:07:03.177 element at address: 0x200019900040 with size: 0.999939 MiB 00:07:03.177 element at address: 0x200019a00000 with size: 0.999084 MiB 00:07:03.177 element at address: 0x200032600000 with size: 0.994324 MiB 00:07:03.177 element at address: 0x200000400000 with size: 0.992004 MiB 00:07:03.177 element at address: 0x200019200000 with size: 0.959656 MiB 00:07:03.177 element at address: 0x200019d00040 with size: 0.936401 MiB 00:07:03.177 element at address: 0x200000200000 with size: 0.716980 MiB 00:07:03.177 element at address: 0x20001b400000 with size: 0.564148 MiB 00:07:03.177 element at address: 0x200000c00000 with size: 0.489197 MiB 00:07:03.177 element at address: 0x200019600000 with size: 0.487976 MiB 00:07:03.177 element at address: 0x200019e00000 with size: 0.485413 MiB 00:07:03.177 element at address: 0x200012c00000 with size: 0.433228 MiB 00:07:03.177 element at address: 0x200028800000 with size: 0.390442 MiB 00:07:03.177 element at address: 0x200000800000 with size: 0.350891 MiB 00:07:03.177 list of standard malloc elements. size: 199.286499 MiB 00:07:03.177 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:07:03.177 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:07:03.177 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:07:03.177 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:07:03.177 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:07:03.177 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:07:03.177 element at address: 0x200019deff40 with size: 0.062683 MiB 00:07:03.177 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:07:03.178 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:07:03.178 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:07:03.178 element at address: 0x200012bff040 with size: 0.000305 MiB 00:07:03.178 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:07:03.178 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:07:03.178 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200000cff000 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012bff180 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012bff280 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012bff380 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012bff480 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012bff580 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012bff680 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012bff780 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012bff880 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012bff980 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012c6f780 
with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200019affc40 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:07:03.178 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4922c0 with size: 0.000244 MiB 
00:07:03.179 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:07:03.179 element at 
address: 0x200028863f40 with size: 0.000244 MiB 00:07:03.179 element at address: 0x200028864040 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886af80 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886b080 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886b180 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886b280 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886b380 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886b480 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886b580 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886b680 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886b780 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886b880 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886b980 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886be80 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886c080 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886c180 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886c280 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886c380 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886c480 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886c580 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886c680 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886c780 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886c880 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886c980 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886d080 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886d180 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886d280 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886d380 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886d480 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886d580 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886d680 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886d780 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886d880 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886d980 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886da80 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886db80 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886dd80 
with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886de80 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886df80 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886e080 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886e180 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886e280 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886e380 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886e480 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886e580 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886e680 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886e780 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886e880 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886e980 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886f080 with size: 0.000244 MiB 00:07:03.179 element at address: 0x20002886f180 with size: 0.000244 MiB 00:07:03.180 element at address: 0x20002886f280 with size: 0.000244 MiB 00:07:03.180 element at address: 0x20002886f380 with size: 0.000244 MiB 00:07:03.180 element at address: 0x20002886f480 with size: 0.000244 MiB 00:07:03.180 element at address: 0x20002886f580 with size: 0.000244 MiB 00:07:03.180 element at address: 0x20002886f680 with size: 0.000244 MiB 00:07:03.180 element at address: 0x20002886f780 with size: 0.000244 MiB 00:07:03.180 element at address: 0x20002886f880 with size: 0.000244 MiB 00:07:03.180 element at address: 0x20002886f980 with size: 0.000244 MiB 00:07:03.180 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:07:03.180 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:07:03.180 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:07:03.180 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:07:03.180 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:07:03.180 list of memzone associated elements. 
size: 607.930908 MiB 00:07:03.180 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:07:03.180 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:03.180 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:07:03.180 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:03.180 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:07:03.180 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59041_0 00:07:03.180 element at address: 0x200000dff340 with size: 48.003113 MiB 00:07:03.180 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59041_0 00:07:03.180 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:07:03.180 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59041_0 00:07:03.180 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:07:03.180 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:03.180 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:07:03.180 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:03.180 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:07:03.180 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59041_0 00:07:03.180 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:07:03.180 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59041 00:07:03.180 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:07:03.180 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59041 00:07:03.180 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:07:03.180 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:03.180 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:07:03.180 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:03.180 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:07:03.180 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:03.180 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:07:03.180 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:03.180 element at address: 0x200000cff100 with size: 1.000549 MiB 00:07:03.180 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59041 00:07:03.180 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:07:03.180 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59041 00:07:03.180 element at address: 0x200019affd40 with size: 1.000549 MiB 00:07:03.180 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59041 00:07:03.180 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:07:03.180 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59041 00:07:03.180 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:07:03.180 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59041 00:07:03.180 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:07:03.180 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59041 00:07:03.180 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:07:03.180 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:03.180 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:07:03.180 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:03.180 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:07:03.180 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:07:03.180 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:07:03.180 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59041 00:07:03.180 element at address: 0x20000085df80 with size: 0.125549 MiB 00:07:03.180 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59041 00:07:03.180 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:07:03.180 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:03.180 element at address: 0x200028864140 with size: 0.023804 MiB 00:07:03.180 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:03.180 element at address: 0x200000859d40 with size: 0.016174 MiB 00:07:03.180 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59041 00:07:03.180 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:07:03.180 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:03.180 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:07:03.180 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59041 00:07:03.180 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:07:03.180 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59041 00:07:03.180 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:07:03.180 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59041 00:07:03.180 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:07:03.180 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:03.180 18:08:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:03.180 18:08:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59041 00:07:03.180 18:08:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59041 ']' 00:07:03.180 18:08:37 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59041 00:07:03.180 18:08:37 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:03.180 18:08:37 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.180 18:08:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59041 00:07:03.180 killing process with pid 59041 00:07:03.180 18:08:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.180 18:08:37 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.180 18:08:37 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59041' 00:07:03.180 18:08:37 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59041 00:07:03.180 18:08:37 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59041 00:07:05.710 00:07:05.710 real 0m4.167s 00:07:05.710 user 0m4.093s 00:07:05.710 sys 0m0.684s 00:07:05.710 ************************************ 00:07:05.710 END TEST dpdk_mem_utility 00:07:05.710 ************************************ 00:07:05.710 18:08:39 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.710 18:08:39 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:05.710 18:08:39 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:05.710 18:08:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.710 18:08:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.710 18:08:39 -- common/autotest_common.sh@10 -- # set +x 
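The teardown traced just above is the harness's killprocess helper from common/autotest_common.sh. A hedged reconstruction of that logic, inferred purely from the xtrace lines (@954 through @978) rather than copied from the SPDK tree — the non-Linux branch and the sudo escalation path are not exercised in this run and are assumptions:

```bash
#!/usr/bin/env bash
# Sketch of killprocess as suggested by the trace above; not the verbatim
# SPDK source. Flow: bail on an empty pid, probe liveness, resolve the
# process name, then kill and reap the target.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1               # trace: '[' -z 59041 ']'
    kill -0 "$pid" 2>/dev/null || return 0  # trace: kill -0 59041 (liveness probe)
    local process_name
    if [ "$(uname)" = Linux ]; then         # trace: '[' Linux = Linux ']'
        process_name=$(ps --no-headers -o comm= "$pid")  # resolves to reactor_0 here
    else
        process_name=unknown                # assumption: non-Linux path not shown in the log
    fi
    echo "killing process with pid $pid"
    if [ "$process_name" = sudo ]; then     # trace: '[' reactor_0 = sudo ']'
        sudo kill "$pid"                    # assumption: branch not taken in this run
    else
        kill "$pid"                         # trace: kill 59041
    fi
    wait "$pid"                             # trace: wait 59041
}
```

Invoked here as `killprocess 59041`; the final `wait` only works because the target daemon was launched by the same shell, which is how the harness starts these test processes.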
00:07:05.710 ************************************ 00:07:05.710 START TEST event 00:07:05.710 ************************************ 00:07:05.710 18:08:39 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:05.710 * Looking for test storage... 00:07:05.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:05.710 18:08:40 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:05.710 18:08:40 event -- common/autotest_common.sh@1693 -- # lcov --version 00:07:05.710 18:08:40 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:05.710 18:08:40 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:05.710 18:08:40 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.710 18:08:40 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.710 18:08:40 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.710 18:08:40 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.710 18:08:40 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.710 18:08:40 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.710 18:08:40 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.710 18:08:40 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.710 18:08:40 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.710 18:08:40 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.710 18:08:40 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.710 18:08:40 event -- scripts/common.sh@344 -- # case "$op" in 00:07:05.710 18:08:40 event -- scripts/common.sh@345 -- # : 1 00:07:05.710 18:08:40 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.710 18:08:40 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.710 18:08:40 event -- scripts/common.sh@365 -- # decimal 1 00:07:05.710 18:08:40 event -- scripts/common.sh@353 -- # local d=1 00:07:05.710 18:08:40 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.710 18:08:40 event -- scripts/common.sh@355 -- # echo 1 00:07:05.710 18:08:40 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.710 18:08:40 event -- scripts/common.sh@366 -- # decimal 2 00:07:05.710 18:08:40 event -- scripts/common.sh@353 -- # local d=2 00:07:05.710 18:08:40 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.710 18:08:40 event -- scripts/common.sh@355 -- # echo 2 00:07:05.710 18:08:40 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.710 18:08:40 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.710 18:08:40 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.710 18:08:40 event -- scripts/common.sh@368 -- # return 0 00:07:05.710 18:08:40 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.710 18:08:40 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:05.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.710 --rc genhtml_branch_coverage=1 00:07:05.710 --rc genhtml_function_coverage=1 00:07:05.710 --rc genhtml_legend=1 00:07:05.710 --rc geninfo_all_blocks=1 00:07:05.710 --rc geninfo_unexecuted_blocks=1 00:07:05.710 00:07:05.710 ' 00:07:05.710 18:08:40 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:05.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.710 --rc genhtml_branch_coverage=1 00:07:05.710 --rc genhtml_function_coverage=1 00:07:05.710 --rc genhtml_legend=1 00:07:05.710 --rc 
geninfo_all_blocks=1 00:07:05.710 --rc geninfo_unexecuted_blocks=1 00:07:05.710 00:07:05.710 ' 00:07:05.710 18:08:40 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:05.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.710 --rc genhtml_branch_coverage=1 00:07:05.710 --rc genhtml_function_coverage=1 00:07:05.710 --rc genhtml_legend=1 00:07:05.710 --rc geninfo_all_blocks=1 00:07:05.710 --rc geninfo_unexecuted_blocks=1 00:07:05.710 00:07:05.710 ' 00:07:05.710 18:08:40 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:05.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.710 --rc genhtml_branch_coverage=1 00:07:05.710 --rc genhtml_function_coverage=1 00:07:05.710 --rc genhtml_legend=1 00:07:05.710 --rc geninfo_all_blocks=1 00:07:05.710 --rc geninfo_unexecuted_blocks=1 00:07:05.710 00:07:05.710 ' 00:07:05.710 18:08:40 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:05.710 18:08:40 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:05.710 18:08:40 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:05.710 18:08:40 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:05.710 18:08:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.710 18:08:40 event -- common/autotest_common.sh@10 -- # set +x 00:07:05.710 ************************************ 00:07:05.710 START TEST event_perf 00:07:05.710 ************************************ 00:07:05.710 18:08:40 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:05.968 Running I/O for 1 seconds...[2024-11-26 18:08:40.191383] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:07:05.968 [2024-11-26 18:08:40.191611] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59149 ] 00:07:05.968 [2024-11-26 18:08:40.396584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:06.226 [2024-11-26 18:08:40.575762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.226 [2024-11-26 18:08:40.575877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.226 Running I/O for 1 seconds...[2024-11-26 18:08:40.575983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.226 [2024-11-26 18:08:40.576241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.600 00:07:07.600 lcore 0: 202972 00:07:07.600 lcore 1: 202975 00:07:07.600 lcore 2: 202973 00:07:07.600 lcore 3: 202975 00:07:07.600 done. 
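The per-lcore counts just printed (roughly 203k events on each of four cores in one second) come from the event_perf binary that run_test launched. It can be rerun by hand with the same flags, where -m is the reactor core mask and -t the duration in seconds; the path and flags below are taken from the traced command line, and the mask sweep is a hypothetical variation, not something this run performed:

```bash
# Reproducing the run above outside the harness. 0xF pins reactors to
# lcores 0-3, matching the four "Reactor started on core N" lines.
/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1

# Hypothetical sweep over smaller core masks to watch throughput scale:
for mask in 0x1 0x3 0xF; do
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m "$mask" -t 1
done
```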
00:07:07.600 00:07:07.600 real 0m1.692s 00:07:07.600 user 0m4.421s 00:07:07.600 sys 0m0.146s 00:07:07.600 18:08:41 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.600 18:08:41 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:07.600 ************************************ 00:07:07.600 END TEST event_perf 00:07:07.600 ************************************ 00:07:07.600 18:08:41 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:07.600 18:08:41 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:07.600 18:08:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.600 18:08:41 event -- common/autotest_common.sh@10 -- # set +x 00:07:07.600 ************************************ 00:07:07.600 START TEST event_reactor 00:07:07.600 ************************************ 00:07:07.600 18:08:41 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:07.600 [2024-11-26 18:08:41.920321] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:07:07.600 [2024-11-26 18:08:41.920480] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59188 ] 00:07:07.858 [2024-11-26 18:08:42.096196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.858 [2024-11-26 18:08:42.232485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.234 test_start 00:07:09.234 oneshot 00:07:09.234 tick 100 00:07:09.234 tick 100 00:07:09.234 tick 250 00:07:09.234 tick 100 00:07:09.234 tick 100 00:07:09.234 tick 100 00:07:09.234 tick 250 00:07:09.234 tick 500 00:07:09.234 tick 100 00:07:09.234 tick 100 00:07:09.234 tick 250 00:07:09.234 tick 100 00:07:09.234 tick 100 00:07:09.234 test_end 00:07:09.235 00:07:09.235 real 0m1.584s 00:07:09.235 user 0m1.377s 00:07:09.235 sys 0m0.098s 00:07:09.235 18:08:43 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.235 ************************************ 00:07:09.235 END TEST event_reactor 00:07:09.235 ************************************ 00:07:09.235 18:08:43 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:09.235 18:08:43 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:09.235 18:08:43 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:09.235 18:08:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.235 18:08:43 event -- common/autotest_common.sh@10 -- # set +x 00:07:09.235 ************************************ 00:07:09.235 START TEST event_reactor_perf 00:07:09.235 ************************************ 00:07:09.235 18:08:43 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:09.235 [2024-11-26 18:08:43.560673] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
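The scripts/common.sh xtrace that gates the lcov options (the `lt 1.15 2` walk traced before event_perf, and repeated again ahead of the scheduler test below) amounts to a dotted-version comparison: split both versions on `.`, `-`, or `:`, then compare component-wise, treating missing components as zero. A hedged reconstruction from the trace alone — the `<` path is the only one exercised in this log, so the `>` and equality branches are assumptions, and the upstream file handles more operators than shown:

```bash
# Version comparison as reconstructed from the xtrace; the lcov gate
# effectively asks "is lcov 1.15 older than 2?" and enables the --rc
# coverage options when it is.
decimal() {
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0  # assumption: non-numeric parts count as 0
}

cmp_versions() {                                 # trace: cmp_versions 1.15 '<' 2
    local ver1 ver1_l ver2 ver2_l
    IFS=.-: read -ra ver1 <<< "$1"               # trace: IFS=.-: ; read -ra ver1
    IFS=.-: read -ra ver2 <<< "$3"
    local op=$2 v a b
    ver1_l=${#ver1[@]}                           # trace: ver1_l=2
    ver2_l=${#ver2[@]}                           # trace: ver2_l=1
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        a=$(decimal "${ver1[v]:-0}")
        b=$(decimal "${ver2[v]:-0}")
        (( a > b )) && { [[ $op == '>' ]]; return; }  # assumption: untraced branch
        (( a < b )) && { [[ $op == '<' ]]; return; }  # traced path: 1 < 2 -> true
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
}

lt() { cmp_versions "$1" '<' "$2"; }             # trace: lt 1.15 2

lt 1.15 2 && echo "enable lcov branch/function coverage options"
```

This explains the long LCOV_OPTS/LCOV export blocks in the trace: they are only set when the installed lcov predates version 2.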
00:07:09.235 [2024-11-26 18:08:43.560915] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59230 ] 00:07:09.493 [2024-11-26 18:08:43.753749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.493 [2024-11-26 18:08:43.889328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.872 test_start 00:07:10.872 test_end 00:07:10.872 Performance: 284573 events per second 00:07:10.872 00:07:10.872 real 0m1.595s 00:07:10.872 user 0m1.368s 00:07:10.872 sys 0m0.118s 00:07:10.872 18:08:45 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.872 18:08:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:10.872 ************************************ 00:07:10.872 END TEST event_reactor_perf 00:07:10.872 ************************************ 00:07:10.872 18:08:45 event -- event/event.sh@49 -- # uname -s 00:07:10.872 18:08:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:10.872 18:08:45 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:10.872 18:08:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.872 18:08:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.872 18:08:45 event -- common/autotest_common.sh@10 -- # set +x 00:07:10.872 ************************************ 00:07:10.872 START TEST event_scheduler 00:07:10.872 ************************************ 00:07:10.872 18:08:45 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:10.872 * Looking for test storage... 
00:07:10.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:10.872 18:08:45 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:10.872 18:08:45 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:07:10.872 18:08:45 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:10.872 18:08:45 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:10.872 18:08:45 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.873 18:08:45 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.873 18:08:45 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.873 18:08:45 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:10.873 18:08:45 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.873 18:08:45 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:10.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.873 --rc genhtml_branch_coverage=1 00:07:10.873 --rc genhtml_function_coverage=1 00:07:10.873 --rc genhtml_legend=1 00:07:10.873 --rc geninfo_all_blocks=1 00:07:10.873 --rc geninfo_unexecuted_blocks=1 00:07:10.873 00:07:10.873 ' 00:07:10.873 18:08:45 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:10.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.873 --rc genhtml_branch_coverage=1 00:07:10.873 --rc genhtml_function_coverage=1 00:07:10.873 --rc genhtml_legend=1 00:07:10.873 --rc geninfo_all_blocks=1 00:07:10.873 --rc geninfo_unexecuted_blocks=1 00:07:10.873 00:07:10.873 ' 00:07:11.131 18:08:45 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:11.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.131 --rc genhtml_branch_coverage=1 00:07:11.131 --rc genhtml_function_coverage=1 00:07:11.131 --rc genhtml_legend=1 00:07:11.131 --rc geninfo_all_blocks=1 00:07:11.131 --rc geninfo_unexecuted_blocks=1 00:07:11.131 00:07:11.131 ' 00:07:11.131 18:08:45 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:11.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.131 --rc genhtml_branch_coverage=1 00:07:11.131 --rc genhtml_function_coverage=1 00:07:11.131 --rc genhtml_legend=1 00:07:11.131 --rc geninfo_all_blocks=1 00:07:11.131 --rc geninfo_unexecuted_blocks=1 00:07:11.131 00:07:11.131 ' 00:07:11.131 18:08:45 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:11.131 18:08:45 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59301 00:07:11.131 18:08:45 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:11.131 18:08:45 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59301 00:07:11.131 18:08:45 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:11.131 18:08:45 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59301 ']' 00:07:11.131 18:08:45 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.131 18:08:45 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.131 18:08:45 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.131 18:08:45 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.131 18:08:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:11.131 [2024-11-26 18:08:45.477987] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:07:11.131 [2024-11-26 18:08:45.478265] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59301 ] 00:07:11.390 [2024-11-26 18:08:45.662366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:11.390 [2024-11-26 18:08:45.804623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.390 [2024-11-26 18:08:45.805191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.390 [2024-11-26 18:08:45.805350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.390 [2024-11-26 18:08:45.805358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.957 18:08:46 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.957 18:08:46 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:11.957 18:08:46 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:11.957 18:08:46 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.957 18:08:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:11.957 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:11.957 POWER: Cannot set governor of lcore 0 to userspace 00:07:11.957 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:11.957 POWER: Cannot set governor of lcore 0 to performance 00:07:11.957 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:11.957 POWER: Cannot set governor of lcore 0 to userspace 00:07:11.957 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:11.957 POWER: Cannot set governor of lcore 0 to userspace 00:07:11.957 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:11.957 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:11.957 POWER: Unable to set Power Management Environment for lcore 0 00:07:11.957 [2024-11-26 18:08:46.399947] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:07:11.957 [2024-11-26 18:08:46.399980] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:07:11.957 [2024-11-26 18:08:46.399994] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:11.957 [2024-11-26 18:08:46.400038] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:11.957 [2024-11-26 18:08:46.400051] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:11.957 [2024-11-26 18:08:46.400065] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:11.957 18:08:46 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.957 18:08:46 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:11.957 18:08:46 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.957 18:08:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:12.522 [2024-11-26 18:08:46.745204] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:12.522 18:08:46 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.523 18:08:46 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:12.523 18:08:46 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:12.523 18:08:46 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:12.523 18:08:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:12.523 ************************************ 00:07:12.523 START TEST scheduler_create_thread 00:07:12.523 ************************************ 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.523 2 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.523 3 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.523 4 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.523 5 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.523 6 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.523 7 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.523 8 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.523 9 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.523 10 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.523 18:08:46 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.898 18:08:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.898 00:07:13.898 real 0m1.175s 00:07:13.898 user 0m0.019s 00:07:13.898 sys 0m0.007s 00:07:13.898 ************************************ 00:07:13.898 END TEST scheduler_create_thread 00:07:13.898 ************************************ 00:07:13.898 18:08:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.898 18:08:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.898 18:08:47 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:13.898 18:08:47 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59301 00:07:13.898 18:08:47 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59301 ']' 00:07:13.898 18:08:47 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59301 00:07:13.898 18:08:47 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:13.898 18:08:47 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.898 18:08:47 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59301 00:07:13.898 18:08:48 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:13.898 18:08:48 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:13.898 killing process with pid 59301 00:07:13.898 18:08:48 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59301' 00:07:13.898 18:08:48 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59301 00:07:13.898 18:08:48 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59301 00:07:14.157 [2024-11-26 18:08:48.412693] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:15.091 00:07:15.091 real 0m4.382s 00:07:15.091 user 0m7.528s 00:07:15.091 sys 0m0.535s 00:07:15.091 18:08:49 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.091 ************************************ 00:07:15.091 END TEST event_scheduler 00:07:15.091 ************************************ 00:07:15.091 18:08:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:15.348 18:08:49 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:15.348 18:08:49 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:15.348 18:08:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:15.348 18:08:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.348 18:08:49 event -- common/autotest_common.sh@10 -- # set +x 00:07:15.348 ************************************ 00:07:15.348 START TEST app_repeat 00:07:15.349 ************************************ 00:07:15.349 18:08:49 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:15.349 18:08:49 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.349 18:08:49 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:15.349 18:08:49 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:15.349 18:08:49 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:15.349 18:08:49 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:15.349 18:08:49 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:15.349 18:08:49 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:15.349 18:08:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59396 00:07:15.349 18:08:49 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:15.349 18:08:49 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:15.349 18:08:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59396' 00:07:15.349 Process app_repeat pid: 59396 00:07:15.349 18:08:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:15.349 spdk_app_start Round 0 00:07:15.349 18:08:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:15.349 18:08:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59396 /var/tmp/spdk-nbd.sock 00:07:15.349 18:08:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59396 ']' 00:07:15.349 18:08:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:15.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:15.349 18:08:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.349 18:08:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:15.349 18:08:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.349 18:08:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:15.349 [2024-11-26 18:08:49.667171] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
00:07:15.349 [2024-11-26 18:08:49.667355] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59396 ] 00:07:15.606 [2024-11-26 18:08:49.860532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:15.606 [2024-11-26 18:08:50.013597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.606 [2024-11-26 18:08:50.013607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.567 18:08:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.567 18:08:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:16.567 18:08:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:16.826 Malloc0 00:07:16.826 18:08:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.085 Malloc1 00:07:17.085 18:08:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.085 18:08:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.085 18:08:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.085 18:08:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:17.085 18:08:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.085 18:08:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:17.085 18:08:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.085 18:08:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.085 18:08:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.085 18:08:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:17.085 18:08:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.085 18:08:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:17.085 18:08:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:17.085 18:08:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:17.085 18:08:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.085 18:08:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:17.343 /dev/nbd0 00:07:17.343 18:08:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:17.343 18:08:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:17.343 18:08:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:17.343 18:08:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:17.343 18:08:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:17.343 18:08:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:17.343 18:08:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:17.343 18:08:51 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:07:17.343 18:08:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:17.343 18:08:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:17.343 18:08:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:17.343 1+0 records in 00:07:17.343 1+0 records out 00:07:17.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248054 s, 16.5 MB/s 00:07:17.343 18:08:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.343 18:08:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:17.343 18:08:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.343 18:08:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:17.343 18:08:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:17.343 18:08:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.343 18:08:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.343 18:08:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:17.907 /dev/nbd1 00:07:17.907 18:08:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:17.907 18:08:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:17.908 18:08:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:17.908 18:08:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:17.908 18:08:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:17.908 18:08:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:17.908 18:08:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:17.908 18:08:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:17.908 18:08:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:17.908 18:08:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:17.908 18:08:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:17.908 1+0 records in 00:07:17.908 1+0 records out 00:07:17.908 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312436 s, 13.1 MB/s 00:07:17.908 18:08:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.908 18:08:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:17.908 18:08:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.908 18:08:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:17.908 18:08:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:17.908 18:08:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.908 18:08:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.908 18:08:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.908 18:08:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
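Both device nodes above go through the same waitfornbd gate before the test proceeds: poll /proc/partitions until the nbd name appears, then require a single 4096-byte O_DIRECT read to land a non-empty file. A sketch assembled from the traced lines; the sleep between polls is an assumption, since the success path in this log never waits, and the scratch path here stands in for the repo's test/event/nbdtest file:

# Sketch of waitfornbd as traced above for nbd0 and nbd1 (loop bounds and
# the 4096-byte probe match the trace; the sleep is assumed).
waitfornbd() {
    local nbd_name=$1 i size
    # Up to 20 tries for the device to appear in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed back-off; not exercised in this log
    done
    # Then up to 20 tries for a direct-I/O read of one block to succeed.
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || continue
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ] && return 0   # the '[' 4096 '!=' 0 ']' check in the trace
    done
    return 1
}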
00:07:17.908 18:08:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.165 18:08:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:18.165 { 00:07:18.165 "nbd_device": "/dev/nbd0", 00:07:18.165 "bdev_name": "Malloc0" 00:07:18.165 }, 00:07:18.165 { 00:07:18.165 "nbd_device": "/dev/nbd1", 00:07:18.165 "bdev_name": "Malloc1" 00:07:18.165 } 00:07:18.165 ]' 00:07:18.165 18:08:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:18.165 { 00:07:18.165 "nbd_device": "/dev/nbd0", 00:07:18.165 "bdev_name": "Malloc0" 00:07:18.165 }, 00:07:18.165 { 00:07:18.165 "nbd_device": "/dev/nbd1", 00:07:18.165 "bdev_name": "Malloc1" 00:07:18.165 } 00:07:18.165 ]' 00:07:18.165 18:08:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.165 18:08:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:18.165 /dev/nbd1' 00:07:18.165 18:08:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.165 18:08:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:18.165 /dev/nbd1' 00:07:18.165 18:08:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:18.165 18:08:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:18.165 18:08:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:18.165 18:08:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:18.165 18:08:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:18.165 18:08:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.165 18:08:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.165 18:08:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:18.165 18:08:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.165 18:08:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:18.166 18:08:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:18.166 256+0 records in 00:07:18.166 256+0 records out 00:07:18.166 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00660872 s, 159 MB/s 00:07:18.166 18:08:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.166 18:08:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:18.166 256+0 records in 00:07:18.166 256+0 records out 00:07:18.166 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0330494 s, 31.7 MB/s 00:07:18.166 18:08:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.166 18:08:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:18.424 256+0 records in 00:07:18.424 256+0 records out 00:07:18.424 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0357374 s, 29.3 MB/s 00:07:18.424 18:08:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:18.424 18:08:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.424 18:08:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.424 18:08:52 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:18.424 18:08:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.424 18:08:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:18.424 18:08:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:18.424 18:08:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.424 18:08:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:18.424 18:08:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.424 18:08:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:18.424 18:08:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.424 18:08:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:18.424 18:08:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.424 18:08:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.424 18:08:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.424 18:08:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:18.424 18:08:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.424 18:08:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:18.682 18:08:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:18.682 18:08:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.682 18:08:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.682 18:08:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.682 18:08:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.682 18:08:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.682 18:08:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:18.682 18:08:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.682 18:08:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.682 18:08:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:18.939 18:08:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:18.939 18:08:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:18.939 18:08:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:18.939 18:08:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.939 18:08:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.939 18:08:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:18.939 18:08:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:18.939 18:08:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.939 18:08:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.939 18:08:53 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.939 18:08:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.197 18:08:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:19.197 18:08:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:19.197 18:08:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:19.197 18:08:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:19.197 18:08:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:19.197 18:08:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:19.197 18:08:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:19.197 18:08:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:19.197 18:08:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:19.197 18:08:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:19.197 18:08:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:19.197 18:08:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:19.197 18:08:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:19.762 18:08:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:21.135 [2024-11-26 18:08:55.217916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:21.135 [2024-11-26 18:08:55.350059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.135 [2024-11-26 18:08:55.350068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.135 [2024-11-26 18:08:55.542393] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:21.135 [2024-11-26 18:08:55.542541] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:23.037 18:08:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:23.037 spdk_app_start Round 1 00:07:23.037 18:08:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:23.037 18:08:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59396 /var/tmp/spdk-nbd.sock 00:07:23.037 18:08:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59396 ']' 00:07:23.037 18:08:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:23.037 18:08:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:23.037 18:08:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
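The count checks bracketing each round (count=2 right after the disks start, count=0 once they are stopped) come from an nbd_get_count helper: list the exported disks over RPC, extract the device paths with jq, and count the /dev/nbd matches. Sketch reconstructed from the trace; the '|| true' mirrors the bare 'true' the log shows when grep -c finds nothing and exits non-zero:

# Sketch of nbd_get_count as traced above (rpc.py path taken from the log).
nbd_get_count() {
    local rpc_server=$1
    local nbd_disks_json nbd_disks_name count
    nbd_disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    # grep -c exits non-zero on zero matches, hence the 'true' in the trace.
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    echo "$count"
}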
00:07:23.037 18:08:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.037 18:08:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:23.037 18:08:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.037 18:08:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:23.037 18:08:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:23.295 Malloc0 00:07:23.554 18:08:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:23.812 Malloc1 00:07:23.812 18:08:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:23.812 18:08:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.812 18:08:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:23.812 18:08:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:23.812 18:08:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.812 18:08:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:23.812 18:08:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:23.812 18:08:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.812 18:08:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:23.812 18:08:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:23.812 18:08:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:23.812 18:08:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:23.812 18:08:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:23.812 18:08:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:23.812 18:08:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:23.812 18:08:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:24.070 /dev/nbd0 00:07:24.070 18:08:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:24.070 18:08:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:24.070 18:08:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:24.070 18:08:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:24.070 18:08:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:24.070 18:08:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:24.070 18:08:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:24.070 18:08:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:24.070 18:08:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:24.070 18:08:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:24.070 18:08:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:24.070 1+0 records in 00:07:24.070 1+0 records out 
00:07:24.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310263 s, 13.2 MB/s 00:07:24.070 18:08:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:24.070 18:08:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:24.070 18:08:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:24.070 18:08:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:24.070 18:08:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:24.070 18:08:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.070 18:08:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:24.070 18:08:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:24.329 /dev/nbd1 00:07:24.588 18:08:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:24.588 18:08:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:24.588 18:08:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:24.588 18:08:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:24.588 18:08:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:24.588 18:08:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:24.588 18:08:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:24.588 18:08:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:24.588 18:08:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:24.588 18:08:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:24.588 18:08:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:24.588 1+0 records in 00:07:24.588 1+0 records out 00:07:24.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373805 s, 11.0 MB/s 00:07:24.588 18:08:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:24.588 18:08:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:24.588 18:08:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:24.588 18:08:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:24.588 18:08:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:24.588 18:08:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.588 18:08:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:24.588 18:08:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:24.588 18:08:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.588 18:08:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:24.846 { 00:07:24.846 "nbd_device": "/dev/nbd0", 00:07:24.846 "bdev_name": "Malloc0" 00:07:24.846 }, 00:07:24.846 { 00:07:24.846 "nbd_device": "/dev/nbd1", 00:07:24.846 "bdev_name": "Malloc1" 00:07:24.846 } 
00:07:24.846 ]' 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:24.846 { 00:07:24.846 "nbd_device": "/dev/nbd0", 00:07:24.846 "bdev_name": "Malloc0" 00:07:24.846 }, 00:07:24.846 { 00:07:24.846 "nbd_device": "/dev/nbd1", 00:07:24.846 "bdev_name": "Malloc1" 00:07:24.846 } 00:07:24.846 ]' 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:24.846 /dev/nbd1' 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:24.846 /dev/nbd1' 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:24.846 256+0 records in 00:07:24.846 256+0 records out 00:07:24.846 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00934385 s, 112 MB/s 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:24.846 256+0 records in 00:07:24.846 256+0 records out 00:07:24.846 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227238 s, 46.1 MB/s 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:24.846 256+0 records in 00:07:24.846 256+0 records out 00:07:24.846 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299337 s, 35.0 MB/s 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:24.846 18:08:59 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:24.846 18:08:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:25.410 18:08:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:25.410 18:08:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:25.410 18:08:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:25.410 18:08:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.410 18:08:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.410 18:08:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:25.410 18:08:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:25.410 18:08:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.410 18:08:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.411 18:08:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:25.668 18:08:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:25.668 18:08:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:25.668 18:08:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:25.668 18:08:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.668 18:08:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.668 18:08:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:25.668 18:08:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:25.668 18:08:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.668 18:08:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:25.668 18:08:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.668 18:08:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:25.925 18:09:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:25.925 18:09:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:25.925 18:09:00 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:25.925 18:09:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:25.925 18:09:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:25.925 18:09:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:25.925 18:09:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:25.925 18:09:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:25.925 18:09:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:25.925 18:09:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:25.925 18:09:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:25.925 18:09:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:25.925 18:09:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:26.489 18:09:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:27.860 [2024-11-26 18:09:01.890071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:27.860 [2024-11-26 18:09:02.023585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.860 [2024-11-26 18:09:02.023605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.860 [2024-11-26 18:09:02.219597] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:27.860 [2024-11-26 18:09:02.219684] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:29.764 spdk_app_start Round 2 00:07:29.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:29.764 18:09:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:29.764 18:09:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:29.764 18:09:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59396 /var/tmp/spdk-nbd.sock 00:07:29.764 18:09:03 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59396 ']' 00:07:29.764 18:09:03 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:29.764 18:09:03 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.764 18:09:03 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
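Each round's data path is the nbd_dd_data_verify pass traced in Rounds 0 and 1 above: in write mode, 1 MiB of /dev/urandom is staged in a temp file and copied onto every nbd device with O_DIRECT; in verify mode, the first 1M of each device is compared byte for byte against that same file, which is then removed. Sketch from the traced commands, with a stand-in temp path:

# Sketch of nbd_dd_data_verify as traced once per round above.
nbd_dd_data_verify() {
    local nbd_list=($1) operation=$2    # e.g. '/dev/nbd0 /dev/nbd1' write
    local tmp_file=/tmp/nbdrandtest i
    if [ "$operation" = write ]; then
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256   # 1 MiB random pattern
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
        done
    elif [ "$operation" = verify ]; then
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i"   # byte-for-byte check of the first 1M
        done
        rm "$tmp_file"   # the pattern file lives across the write/verify pair
    fi
}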
00:07:29.764 18:09:03 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.764 18:09:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:29.764 18:09:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.764 18:09:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:29.764 18:09:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:30.332 Malloc0 00:07:30.332 18:09:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:30.603 Malloc1 00:07:30.603 18:09:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:30.603 18:09:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.603 18:09:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:30.603 18:09:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:30.603 18:09:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.603 18:09:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:30.603 18:09:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:30.603 18:09:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.603 18:09:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:30.603 18:09:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:30.603 18:09:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:30.603 18:09:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:30.603 18:09:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:30.603 18:09:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:30.603 18:09:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:30.603 18:09:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:30.867 /dev/nbd0 00:07:30.867 18:09:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:30.867 18:09:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:30.867 18:09:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:30.868 18:09:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:30.868 18:09:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:30.868 18:09:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:30.868 18:09:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:30.868 18:09:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:30.868 18:09:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:30.868 18:09:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:30.868 18:09:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:30.868 1+0 records in 00:07:30.868 1+0 records out 
00:07:30.868 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315173 s, 13.0 MB/s 00:07:30.868 18:09:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:30.868 18:09:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:30.868 18:09:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:30.868 18:09:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:30.868 18:09:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:30.868 18:09:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:30.868 18:09:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:30.868 18:09:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:31.126 /dev/nbd1 00:07:31.384 18:09:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:31.384 18:09:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:31.384 18:09:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:31.384 18:09:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:31.384 18:09:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:31.384 18:09:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:31.384 18:09:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:31.384 18:09:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:31.384 18:09:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:31.384 18:09:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:31.384 18:09:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:31.384 1+0 records in 00:07:31.384 1+0 records out 00:07:31.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301513 s, 13.6 MB/s 00:07:31.384 18:09:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:31.384 18:09:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:31.384 18:09:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:31.384 18:09:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:31.384 18:09:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:31.384 18:09:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:31.384 18:09:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:31.384 18:09:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:31.384 18:09:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.384 18:09:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:31.643 18:09:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:31.643 { 00:07:31.643 "nbd_device": "/dev/nbd0", 00:07:31.643 "bdev_name": "Malloc0" 00:07:31.643 }, 00:07:31.643 { 00:07:31.643 "nbd_device": "/dev/nbd1", 00:07:31.643 "bdev_name": "Malloc1" 00:07:31.643 } 
00:07:31.643 ]' 00:07:31.643 18:09:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:31.643 { 00:07:31.643 "nbd_device": "/dev/nbd0", 00:07:31.643 "bdev_name": "Malloc0" 00:07:31.643 }, 00:07:31.643 { 00:07:31.643 "nbd_device": "/dev/nbd1", 00:07:31.643 "bdev_name": "Malloc1" 00:07:31.643 } 00:07:31.643 ]' 00:07:31.643 18:09:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:31.643 18:09:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:31.643 /dev/nbd1' 00:07:31.643 18:09:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:31.643 /dev/nbd1' 00:07:31.643 18:09:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:31.643 18:09:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:31.643 18:09:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:31.643 18:09:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:31.643 18:09:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:31.643 18:09:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:31.643 18:09:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.643 18:09:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:31.643 18:09:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:31.643 18:09:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:31.643 18:09:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:31.643 18:09:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:31.643 256+0 records in 00:07:31.643 256+0 records out 00:07:31.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00847768 s, 124 MB/s 00:07:31.643 18:09:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:31.644 18:09:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:31.644 256+0 records in 00:07:31.644 256+0 records out 00:07:31.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273773 s, 38.3 MB/s 00:07:31.644 18:09:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:31.644 18:09:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:31.644 256+0 records in 00:07:31.644 256+0 records out 00:07:31.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.030323 s, 34.6 MB/s 00:07:31.644 18:09:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:31.644 18:09:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.644 18:09:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:31.644 18:09:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:31.644 18:09:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:31.644 18:09:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:31.644 18:09:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:31.644 18:09:06 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:31.644 18:09:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:31.644 18:09:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:31.644 18:09:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:31.644 18:09:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:31.644 18:09:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:31.644 18:09:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:31.644 18:09:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:31.644 18:09:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:31.644 18:09:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:31.644 18:09:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.644 18:09:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:32.210 18:09:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:32.210 18:09:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:32.210 18:09:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:32.210 18:09:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.210 18:09:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.210 18:09:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:32.210 18:09:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:32.210 18:09:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.210 18:09:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.210 18:09:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:32.467 18:09:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:32.467 18:09:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:32.467 18:09:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:32.467 18:09:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.467 18:09:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.467 18:09:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:32.467 18:09:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:32.467 18:09:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.467 18:09:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:32.467 18:09:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.467 18:09:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:32.725 18:09:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:32.725 18:09:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:32.725 18:09:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:32.725 18:09:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:32.725 18:09:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:32.725 18:09:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:32.725 18:09:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:32.725 18:09:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:32.725 18:09:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:32.725 18:09:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:32.725 18:09:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:32.725 18:09:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:32.725 18:09:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:33.291 18:09:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:34.718 [2024-11-26 18:09:08.795806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:34.718 [2024-11-26 18:09:08.928326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.718 [2024-11-26 18:09:08.928339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.718 [2024-11-26 18:09:09.127376] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:34.718 [2024-11-26 18:09:09.127905] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:36.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:36.619 18:09:10 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59396 /var/tmp/spdk-nbd.sock 00:07:36.619 18:09:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59396 ']' 00:07:36.619 18:09:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:36.619 18:09:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.619 18:09:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
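With Round 2 wound down, the driver loop from test/event/event.sh has now run its full course. Reconstructed loosely from the traced line numbers (@23-@25, @34-@35, @38-@39), the shape is: three rounds that each wait for the relaunched app's RPC socket, exercise the nbd write/verify pass, then ask the app to SIGTERM itself and pause while it restarts; a final wait and killprocess close it out. This sketch assumes the autotest helpers (waitforlisten, killprocess) are already sourced and that repeat_pid holds the app's pid:

# Sketch of the app_repeat driver loop (reconstructed from the trace).
for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock    # block until the RPC socket answers
    # ... bdev_malloc_create x2 + nbd_rpc_data_verify run here each round ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        spdk_kill_instance SIGTERM                        # app_repeat restarts itself
    sleep 3                                               # give the next round time to come up
done
waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
killprocess "$repeat_pid"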
00:07:36.619 18:09:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.619 18:09:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:36.619 18:09:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.619 18:09:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:36.619 18:09:11 event.app_repeat -- event/event.sh@39 -- # killprocess 59396 00:07:36.619 18:09:11 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59396 ']' 00:07:36.619 18:09:11 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59396 00:07:36.619 18:09:11 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:36.619 18:09:11 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.619 18:09:11 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59396 00:07:36.619 18:09:11 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:36.619 killing process with pid 59396 00:07:36.619 18:09:11 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:36.619 18:09:11 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59396' 00:07:36.619 18:09:11 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59396 00:07:36.619 18:09:11 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59396 00:07:37.994 spdk_app_start is called in Round 0. 00:07:37.994 Shutdown signal received, stop current app iteration 00:07:37.994 Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 reinitialization... 00:07:37.994 spdk_app_start is called in Round 1. 00:07:37.994 Shutdown signal received, stop current app iteration 00:07:37.994 Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 reinitialization... 00:07:37.994 spdk_app_start is called in Round 2. 00:07:37.994 Shutdown signal received, stop current app iteration 00:07:37.994 Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 reinitialization... 00:07:37.994 spdk_app_start is called in Round 3. 00:07:37.994 Shutdown signal received, stop current app iteration 00:07:37.994 ************************************ 00:07:37.994 END TEST app_repeat 00:07:37.994 ************************************ 00:07:37.994 18:09:12 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:37.994 18:09:12 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:37.994 00:07:37.994 real 0m22.474s 00:07:37.994 user 0m49.998s 00:07:37.994 sys 0m3.301s 00:07:37.994 18:09:12 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.994 18:09:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:37.994 18:09:12 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:37.994 18:09:12 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:37.994 18:09:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.994 18:09:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.994 18:09:12 event -- common/autotest_common.sh@10 -- # set +x 00:07:37.994 ************************************ 00:07:37.994 START TEST cpu_locks 00:07:37.994 ************************************ 00:07:37.994 18:09:12 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:37.994 * Looking for test storage... 
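cpu_locks.sh enters through the same run_test harness that framed app_repeat: an argument-count guard ('[' 2 -le 1 ']' in the trace), a START TEST banner, the timed test body (the real/user/sys lines above are its output), and an END TEST banner. A sketch of that wrapper; the banner text and the guard match the log, while the real harness also manages xtrace and timing bookkeeping that is elided here:

# Sketch of the run_test wrapper whose banners appear throughout this log.
run_test() {
    [ "$#" -le 1 ] && return 1   # needs a test name plus a command
    local test_name=$1 rc
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                    # emits the real/user/sys summary seen above
    rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}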
00:07:37.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:37.994 18:09:12 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:37.994 18:09:12 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:37.994 18:09:12 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:37.994 18:09:12 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.994 18:09:12 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:37.994 18:09:12 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.994 18:09:12 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:37.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.994 --rc genhtml_branch_coverage=1 00:07:37.994 --rc genhtml_function_coverage=1 00:07:37.994 --rc genhtml_legend=1 00:07:37.994 --rc geninfo_all_blocks=1 00:07:37.994 --rc geninfo_unexecuted_blocks=1 00:07:37.994 00:07:37.994 ' 00:07:37.994 18:09:12 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:37.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.994 --rc genhtml_branch_coverage=1 00:07:37.994 --rc genhtml_function_coverage=1 
00:07:37.994 --rc genhtml_legend=1 00:07:37.994 --rc geninfo_all_blocks=1 00:07:37.994 --rc geninfo_unexecuted_blocks=1 00:07:37.994 00:07:37.994 ' 00:07:37.995 18:09:12 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:37.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.995 --rc genhtml_branch_coverage=1 00:07:37.995 --rc genhtml_function_coverage=1 00:07:37.995 --rc genhtml_legend=1 00:07:37.995 --rc geninfo_all_blocks=1 00:07:37.995 --rc geninfo_unexecuted_blocks=1 00:07:37.995 00:07:37.995 ' 00:07:37.995 18:09:12 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:37.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.995 --rc genhtml_branch_coverage=1 00:07:37.995 --rc genhtml_function_coverage=1 00:07:37.995 --rc genhtml_legend=1 00:07:37.995 --rc geninfo_all_blocks=1 00:07:37.995 --rc geninfo_unexecuted_blocks=1 00:07:37.995 00:07:37.995 ' 00:07:37.995 18:09:12 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:37.995 18:09:12 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:37.995 18:09:12 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:37.995 18:09:12 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:37.995 18:09:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:37.995 18:09:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.995 18:09:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:37.995 ************************************ 00:07:37.995 START TEST default_locks 00:07:37.995 ************************************ 00:07:37.995 18:09:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:37.995 18:09:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59878 00:07:37.995 18:09:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59878 00:07:37.995 18:09:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:37.995 18:09:12 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59878 ']' 00:07:37.995 18:09:12 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.995 18:09:12 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.995 18:09:12 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.995 18:09:12 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.995 18:09:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:38.253 [2024-11-26 18:09:12.454216] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
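The xtrace at the top of this block steps through scripts/common.sh's version comparison (lt 1.15 2 via cmp_versions) to decide whether the installed lcov predates 2.x and therefore needs the branch/function coverage flags exported above. A minimal standalone sketch of the same split-and-compare idea, assuming purely numeric dot-separated fields (the real helper also normalizes non-numeric fields through its decimal function; this is not the actual scripts/common.sh code):

version_lt() {
    # Sketch only: split both versions on the separators the trace uses (IFS=.-:)
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # Missing fields default to 0, so "2" compares like "2.0"
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov is older than 2.x"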
00:07:38.253 [2024-11-26 18:09:12.454421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59878 ] 00:07:38.253 [2024-11-26 18:09:12.651646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.512 [2024-11-26 18:09:12.816356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.446 18:09:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.446 18:09:13 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:39.446 18:09:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59878 00:07:39.446 18:09:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59878 00:07:39.446 18:09:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:40.013 18:09:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59878 00:07:40.013 18:09:14 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59878 ']' 00:07:40.013 18:09:14 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59878 00:07:40.013 18:09:14 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:40.013 18:09:14 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.013 18:09:14 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59878 00:07:40.013 18:09:14 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.013 killing process with pid 59878 00:07:40.013 18:09:14 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.013 18:09:14 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59878' 00:07:40.013 18:09:14 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59878 00:07:40.013 18:09:14 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59878 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59878 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59878 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59878 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59878 ']' 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.542 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:42.542 ERROR: process (pid: 59878) is no longer running 00:07:42.542 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59878) - No such process 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:42.542 00:07:42.542 real 0m4.192s 00:07:42.542 user 0m4.244s 00:07:42.542 sys 0m0.798s 00:07:42.542 ************************************ 00:07:42.542 END TEST default_locks 00:07:42.542 ************************************ 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.542 18:09:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:42.542 18:09:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:42.542 18:09:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.542 18:09:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.542 18:09:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:42.542 ************************************ 00:07:42.542 START TEST default_locks_via_rpc 00:07:42.542 ************************************ 00:07:42.542 18:09:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:42.542 18:09:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59953 00:07:42.542 18:09:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59953 00:07:42.542 18:09:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:42.542 18:09:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59953 ']' 00:07:42.542 18:09:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.543 18:09:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
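The default_locks test above relies on the helper traced at event/cpu_locks.sh@22: a target that owns its core must show up in lslocks (util-linux) holding a file lock whose path contains spdk_cpu_lock, and once the process is killed, waitforlisten wrapped in NOT must fail. The lock probe as a standalone sketch:

# Succeeds if the given pid holds at least one SPDK CPU-core file lock
# (the lock files live under /var/tmp, one per claimed core).
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}
locks_exist "$pid" && echo "pid $pid holds its core lock"   # $pid is illustrative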
00:07:42.543 18:09:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.543 18:09:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.543 18:09:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.543 [2024-11-26 18:09:16.700893] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:07:42.543 [2024-11-26 18:09:16.701110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59953 ] 00:07:42.543 [2024-11-26 18:09:16.888968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.800 [2024-11-26 18:09:17.023441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.754 18:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.754 18:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:43.754 18:09:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:43.754 18:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.754 18:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.754 18:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.754 18:09:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:43.754 18:09:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:43.754 18:09:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:43.754 18:09:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:43.754 18:09:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:43.754 18:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.754 18:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.754 18:09:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.754 18:09:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59953 00:07:43.754 18:09:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59953 00:07:43.754 18:09:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:44.012 18:09:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59953 00:07:44.012 18:09:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59953 ']' 00:07:44.012 18:09:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59953 00:07:44.012 18:09:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:44.012 18:09:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.012 18:09:18 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59953 00:07:44.012 killing process with pid 59953 00:07:44.012 18:09:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.012 18:09:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.012 18:09:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59953' 00:07:44.012 18:09:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59953 00:07:44.012 18:09:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59953 00:07:46.536 00:07:46.536 real 0m4.093s 00:07:46.536 user 0m4.116s 00:07:46.536 sys 0m0.772s 00:07:46.536 18:09:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.536 18:09:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.536 ************************************ 00:07:46.536 END TEST default_locks_via_rpc 00:07:46.536 ************************************ 00:07:46.536 18:09:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:46.536 18:09:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.536 18:09:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.536 18:09:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:46.536 ************************************ 00:07:46.536 START TEST non_locking_app_on_locked_coremask 00:07:46.536 ************************************ 00:07:46.536 18:09:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:46.536 18:09:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60029 00:07:46.536 18:09:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60029 /var/tmp/spdk.sock 00:07:46.536 18:09:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:46.536 18:09:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60029 ']' 00:07:46.536 18:09:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.536 18:09:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.536 18:09:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.536 18:09:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.536 18:09:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.537 [2024-11-26 18:09:20.850943] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
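default_locks_via_rpc, which finished just above, toggled the lock files at runtime instead of killing the target: framework_disable_cpumask_locks drops them, framework_enable_cpumask_locks re-claims them, and lslocks must see the lock again afterwards. With SPDK's rpc.py the equivalent calls would look roughly like this (a sketch: the method names are taken from the trace, the rpc.py invocation and $tgt_pid are illustrative, and the default socket is /var/tmp/spdk.sock):

# Drop all CPU-core lock files while the target keeps running...
scripts/rpc.py framework_disable_cpumask_locks
# ...then re-claim them and confirm the lock is visible again
# ($tgt_pid stands in for the running spdk_tgt pid):
scripts/rpc.py framework_enable_cpumask_locks
lslocks -p "$tgt_pid" | grep spdk_cpu_lock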
00:07:46.537 [2024-11-26 18:09:20.851145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60029 ] 00:07:46.794 [2024-11-26 18:09:21.032542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.794 [2024-11-26 18:09:21.164322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:47.767 18:09:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.767 18:09:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:47.767 18:09:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60055 00:07:47.767 18:09:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:47.767 18:09:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60055 /var/tmp/spdk2.sock 00:07:47.767 18:09:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60055 ']' 00:07:47.767 18:09:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:47.767 18:09:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.767 18:09:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:47.767 18:09:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.767 18:09:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:47.767 [2024-11-26 18:09:22.185309] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:07:47.767 [2024-11-26 18:09:22.185907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60055 ] 00:07:48.025 [2024-11-26 18:09:22.392392] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
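The deactivation notice above is the heart of non_locking_app_on_locked_coremask: a second spdk_tgt is pointed at the already-locked core but told not to take the lock, so both instances can share core 0. In outline (flags and socket path as traced; the backgrounding and pid capture are illustrative):

# First target claims core 0 and holds /var/tmp/spdk_cpu_lock_000.
build/bin/spdk_tgt -m 0x1 &
pid1=$!
# Second target reuses core 0 but skips lock acquisition entirely,
# and listens on its own RPC socket so the two do not collide.
build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!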
00:07:48.025 [2024-11-26 18:09:22.392468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.283 [2024-11-26 18:09:22.716480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.811 18:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.811 18:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:50.811 18:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60029 00:07:50.811 18:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60029 00:07:50.811 18:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:51.751 18:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60029 00:07:51.751 18:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60029 ']' 00:07:51.751 18:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60029 00:07:51.751 18:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:51.751 18:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.751 18:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60029 00:07:51.751 killing process with pid 60029 00:07:51.751 18:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.751 18:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.751 18:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60029' 00:07:51.752 18:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60029 00:07:51.752 18:09:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60029 00:07:57.016 18:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60055 00:07:57.016 18:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60055 ']' 00:07:57.016 18:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60055 00:07:57.016 18:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:57.016 18:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:57.017 18:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60055 00:07:57.017 killing process with pid 60055 00:07:57.017 18:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:57.017 18:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:57.017 18:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60055' 00:07:57.017 18:09:30 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60055 00:07:57.017 18:09:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60055 00:07:58.910 00:07:58.910 real 0m12.158s 00:07:58.910 user 0m12.766s 00:07:58.910 sys 0m1.595s 00:07:58.910 18:09:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.910 ************************************ 00:07:58.910 END TEST non_locking_app_on_locked_coremask 00:07:58.910 ************************************ 00:07:58.910 18:09:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:58.910 18:09:32 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:58.910 18:09:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.910 18:09:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.910 18:09:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:58.910 ************************************ 00:07:58.910 START TEST locking_app_on_unlocked_coremask 00:07:58.910 ************************************ 00:07:58.910 18:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:58.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.910 18:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60205 00:07:58.910 18:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60205 /var/tmp/spdk.sock 00:07:58.910 18:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:58.910 18:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60205 ']' 00:07:58.910 18:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.910 18:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.910 18:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.910 18:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.910 18:09:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:58.910 [2024-11-26 18:09:33.098823] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:07:58.910 [2024-11-26 18:09:33.100291] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60205 ] 00:07:58.910 [2024-11-26 18:09:33.289530] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
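locking_app_on_unlocked_coremask inverts the previous case: the first target above (pid 60205) is the one started with --disable-cpumask-locks, and the second, started with locking left on, is expected to claim core 0 for itself. The assertion the test makes once both are up, sketched with illustrative pid variables:

# Only the second target (locks enabled) should own the core-0 lock;
# the first opted out, so lslocks is checked against pid2, not pid1.
lslocks -p "$pid2" | grep -q spdk_cpu_lock && echo "second target holds the lock"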
00:07:58.910 [2024-11-26 18:09:33.289900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.168 [2024-11-26 18:09:33.451162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.101 18:09:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.101 18:09:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:00.101 18:09:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60227 00:08:00.101 18:09:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:00.102 18:09:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60227 /var/tmp/spdk2.sock 00:08:00.102 18:09:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60227 ']' 00:08:00.102 18:09:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:00.102 18:09:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.102 18:09:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:00.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:00.102 18:09:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.102 18:09:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:00.102 [2024-11-26 18:09:34.559272] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
00:08:00.102 [2024-11-26 18:09:34.559783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60227 ] 00:08:00.359 [2024-11-26 18:09:34.767500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.617 [2024-11-26 18:09:35.064269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.144 18:09:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.144 18:09:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:03.144 18:09:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60227 00:08:03.144 18:09:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60227 00:08:03.144 18:09:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:03.709 18:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60205 00:08:03.709 18:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60205 ']' 00:08:03.709 18:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60205 00:08:03.709 18:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:03.709 18:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:03.709 18:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60205 00:08:03.709 18:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:03.709 18:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:03.709 18:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60205' 00:08:03.709 killing process with pid 60205 00:08:03.709 18:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60205 00:08:03.709 18:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60205 00:08:09.017 18:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60227 00:08:09.017 18:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60227 ']' 00:08:09.017 18:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60227 00:08:09.017 18:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:09.017 18:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.017 18:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60227 00:08:09.017 killing process with pid 60227 00:08:09.017 18:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.017 18:09:42 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.017 18:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60227' 00:08:09.017 18:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60227 00:08:09.017 18:09:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60227 00:08:10.947 ************************************ 00:08:10.947 END TEST locking_app_on_unlocked_coremask 00:08:10.947 ************************************ 00:08:10.947 00:08:10.947 real 0m12.064s 00:08:10.947 user 0m12.490s 00:08:10.947 sys 0m1.623s 00:08:10.947 18:09:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.947 18:09:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.947 18:09:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:10.947 18:09:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.947 18:09:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.947 18:09:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:10.947 ************************************ 00:08:10.947 START TEST locking_app_on_locked_coremask 00:08:10.947 ************************************ 00:08:10.947 18:09:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:08:10.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.947 18:09:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60375 00:08:10.947 18:09:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:10.947 18:09:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60375 /var/tmp/spdk.sock 00:08:10.947 18:09:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60375 ']' 00:08:10.947 18:09:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.947 18:09:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.947 18:09:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.947 18:09:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.948 18:09:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.948 [2024-11-26 18:09:45.208836] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
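locking_app_on_locked_coremask now brings up a first target (pid 60375) that keeps its core-0 lock, and will assert that a second locking instance cannot start at all. That assertion runs waitforlisten under the suite's NOT wrapper; a simplified stand-in is sketched below (the real helper in autotest_common.sh additionally treats exit codes above 128, i.e. crashes, as genuine failures rather than expected ones, per the (( es > 128 )) check in the trace):

NOT() {
    # Simplified sketch: invert the wrapped command's status, so success
    # here means the command failed, which is what the test expects.
    if "$@"; then
        return 1
    fi
    return 0
}
NOT waitforlisten "$pid2" /var/tmp/spdk2.sock   # passes only if startup failed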
00:08:10.948 [2024-11-26 18:09:45.209366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60375 ] 00:08:10.948 [2024-11-26 18:09:45.402740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.206 [2024-11-26 18:09:45.565383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.140 18:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.140 18:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:12.140 18:09:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60402 00:08:12.140 18:09:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:12.140 18:09:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60402 /var/tmp/spdk2.sock 00:08:12.140 18:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:12.140 18:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60402 /var/tmp/spdk2.sock 00:08:12.140 18:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:12.140 18:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:12.140 18:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:12.140 18:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:12.140 18:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60402 /var/tmp/spdk2.sock 00:08:12.140 18:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60402 ']' 00:08:12.140 18:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:12.140 18:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.140 18:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:12.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:12.140 18:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.140 18:09:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:12.399 [2024-11-26 18:09:46.671856] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
00:08:12.399 [2024-11-26 18:09:46.673071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60402 ] 00:08:12.657 [2024-11-26 18:09:46.884426] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60375 has claimed it. 00:08:12.657 [2024-11-26 18:09:46.884538] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:12.916 ERROR: process (pid: 60402) is no longer running 00:08:12.916 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60402) - No such process 00:08:12.916 18:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.916 18:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:12.916 18:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:12.916 18:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:12.916 18:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:12.916 18:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:12.916 18:09:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60375 00:08:12.916 18:09:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:12.916 18:09:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60375 00:08:13.481 18:09:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60375 00:08:13.481 18:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60375 ']' 00:08:13.481 18:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60375 00:08:13.481 18:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:13.481 18:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.481 18:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60375 00:08:13.481 killing process with pid 60375 00:08:13.481 18:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:13.481 18:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:13.481 18:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60375' 00:08:13.481 18:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60375 00:08:13.481 18:09:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60375 00:08:16.018 ************************************ 00:08:16.018 END TEST locking_app_on_locked_coremask 00:08:16.018 ************************************ 00:08:16.018 00:08:16.018 real 0m5.079s 00:08:16.018 user 0m5.361s 00:08:16.018 sys 0m0.992s 00:08:16.018 18:09:50 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.018 18:09:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:16.018 18:09:50 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:16.018 18:09:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.018 18:09:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.018 18:09:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:16.018 ************************************ 00:08:16.018 START TEST locking_overlapped_coremask 00:08:16.018 ************************************ 00:08:16.018 18:09:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:08:16.018 18:09:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60466 00:08:16.018 18:09:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60466 /var/tmp/spdk.sock 00:08:16.018 18:09:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:16.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.018 18:09:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60466 ']' 00:08:16.018 18:09:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.018 18:09:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.018 18:09:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.018 18:09:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.018 18:09:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:16.018 [2024-11-26 18:09:50.314041] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
00:08:16.018 [2024-11-26 18:09:50.314548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60466 ] 00:08:16.276 [2024-11-26 18:09:50.503023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:16.276 [2024-11-26 18:09:50.645675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.276 [2024-11-26 18:09:50.645839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.276 [2024-11-26 18:09:50.645855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.210 18:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.211 18:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:17.211 18:09:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60489 00:08:17.211 18:09:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60489 /var/tmp/spdk2.sock 00:08:17.211 18:09:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:17.211 18:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:17.211 18:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60489 /var/tmp/spdk2.sock 00:08:17.211 18:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:17.211 18:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:17.211 18:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:17.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:17.211 18:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:17.211 18:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60489 /var/tmp/spdk2.sock 00:08:17.211 18:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60489 ']' 00:08:17.211 18:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:17.211 18:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.211 18:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:17.211 18:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.211 18:09:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:17.469 [2024-11-26 18:09:51.682712] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
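The two masks traced above are what make this an overlap test: -m 0x7 gives the first target cores 0-2, -m 0x1c asks for cores 2-4, and core 2 sits in both, so the second claim must fail. A quick check of the collision (illustrative arithmetic, not part of the test):

# 0x7  = 0b00111 -> cores 0,1,2 (first target, pid 60466)
# 0x1c = 0b11100 -> cores 2,3,4 (second target)
printf 'contested mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2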
00:08:17.469 [2024-11-26 18:09:51.683927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60489 ] 00:08:17.469 [2024-11-26 18:09:51.887519] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60466 has claimed it. 00:08:17.469 [2024-11-26 18:09:51.891630] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:18.035 ERROR: process (pid: 60489) is no longer running 00:08:18.035 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60489) - No such process 00:08:18.035 18:09:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.035 18:09:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:18.035 18:09:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:18.035 18:09:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:18.035 18:09:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:18.035 18:09:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:18.035 18:09:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:18.035 18:09:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:18.035 18:09:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:18.035 18:09:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:18.035 18:09:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60466 00:08:18.035 18:09:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60466 ']' 00:08:18.035 18:09:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60466 00:08:18.035 18:09:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:08:18.035 18:09:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.035 18:09:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60466 00:08:18.035 18:09:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.035 18:09:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.035 18:09:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60466' 00:08:18.035 killing process with pid 60466 00:08:18.035 18:09:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60466 00:08:18.035 18:09:52 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60466 00:08:20.567 00:08:20.567 real 0m4.513s 00:08:20.567 user 0m12.199s 00:08:20.567 sys 0m0.753s 00:08:20.567 18:09:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.567 18:09:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:20.567 ************************************ 00:08:20.567 END TEST locking_overlapped_coremask 00:08:20.567 ************************************ 00:08:20.567 18:09:54 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:20.567 18:09:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:20.567 18:09:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.567 18:09:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:20.567 ************************************ 00:08:20.567 START TEST locking_overlapped_coremask_via_rpc 00:08:20.567 ************************************ 00:08:20.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.567 18:09:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:08:20.567 18:09:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60559 00:08:20.567 18:09:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60559 /var/tmp/spdk.sock 00:08:20.567 18:09:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:20.567 18:09:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60559 ']' 00:08:20.567 18:09:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.567 18:09:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.567 18:09:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.567 18:09:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.567 18:09:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.567 [2024-11-26 18:09:54.854833] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:08:20.567 [2024-11-26 18:09:54.855286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60559 ] 00:08:20.825 [2024-11-26 18:09:55.033861] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
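Worth noting from the end of the previous test: check_remaining_locks verifies that once the failed 0x1c instance is gone, exactly the surviving 0x7 target's lock files remain, one zero-padded file per claimed core. The comparison in sketch form (array names as traced; the echo is illustrative):

# With only the 0x7 target alive, cores 0-2 must be the sole locks.
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "no stale core locks"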
00:08:20.825 [2024-11-26 18:09:55.033935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:20.825 [2024-11-26 18:09:55.167088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.825 [2024-11-26 18:09:55.167232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.825 [2024-11-26 18:09:55.167245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.760 18:09:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.760 18:09:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:21.760 18:09:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60577 00:08:21.760 18:09:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:21.760 18:09:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60577 /var/tmp/spdk2.sock 00:08:21.760 18:09:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60577 ']' 00:08:21.760 18:09:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:21.760 18:09:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.760 18:09:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:21.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:21.760 18:09:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.760 18:09:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.019 [2024-11-26 18:09:56.224167] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:08:22.019 [2024-11-26 18:09:56.224631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60577 ] 00:08:22.019 [2024-11-26 18:09:56.439772] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
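locking_overlapped_coremask_via_rpc reruns the overlap scenario, but, as the two startups above show, both targets launch with --disable-cpumask-locks and the contested locks are claimed only later over JSON-RPC. In outline (flags and masks as traced; paths and the rpc.py call are illustrative):

build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
# Neither holds any core lock yet; the first framework_enable_cpumask_locks
# call wins the contested core 2.
scripts/rpc.py framework_enable_cpumask_locks   # first target: succeeds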
00:08:22.019 [2024-11-26 18:09:56.439857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:22.277 [2024-11-26 18:09:56.713534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.277 [2024-11-26 18:09:56.716710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.277 [2024-11-26 18:09:56.716721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.808 [2024-11-26 18:09:59.030800] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60559 has claimed it. 
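The request/response pair above can be reproduced outside the test harness; rpc_cmd in these scripts ultimately drives scripts/rpc.py, so a manual equivalent would look roughly like this (sketch; socket paths as used in this run):

  scripts/rpc.py framework_enable_cpumask_locks
      # first target, default /var/tmp/spdk.sock: succeeds and locks cores 0-2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
      # second target: fails with code -32603, "Failed to claim CPU core: 2"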
00:08:24.808 request: 00:08:24.808 { 00:08:24.808 "method": "framework_enable_cpumask_locks", 00:08:24.808 "req_id": 1 00:08:24.808 } 00:08:24.808 Got JSON-RPC error response 00:08:24.808 response: 00:08:24.808 { 00:08:24.808 "code": -32603, 00:08:24.808 "message": "Failed to claim CPU core: 2" 00:08:24.808 } 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60559 /var/tmp/spdk.sock 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60559 ']' 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.808 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.067 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.067 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:25.067 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60577 /var/tmp/spdk2.sock 00:08:25.067 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60577 ']' 00:08:25.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:25.067 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:25.067 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.067 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
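The "Waiting for process..." lines above come from waitforlisten, which blocks until the target's RPC socket is usable (note the max_retries=100 local in the traces). The real helper lives in autotest_common.sh; a generic sketch of that shape, assuming a plain poll on the unix socket path:

  wait_for_rpc_sock() {                  # sketch only, not the actual waitforlisten
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
      [[ -S $sock ]] && return 0         # path exists and is a socket
      sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
  }
  wait_for_rpc_sock /var/tmp/spdk2.sock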
00:08:25.067 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.067 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.325 ************************************ 00:08:25.325 END TEST locking_overlapped_coremask_via_rpc 00:08:25.325 ************************************ 00:08:25.325 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.325 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:25.325 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:25.325 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:25.325 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:25.325 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:25.325 00:08:25.325 real 0m4.838s 00:08:25.325 user 0m1.767s 00:08:25.325 sys 0m0.248s 00:08:25.325 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.325 18:09:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.325 18:09:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:25.326 18:09:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60559 ]] 00:08:25.326 18:09:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60559 00:08:25.326 18:09:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60559 ']' 00:08:25.326 18:09:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60559 00:08:25.326 18:09:59 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:25.326 18:09:59 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.326 18:09:59 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60559 00:08:25.326 killing process with pid 60559 00:08:25.326 18:09:59 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:25.326 18:09:59 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:25.326 18:09:59 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60559' 00:08:25.326 18:09:59 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60559 00:08:25.326 18:09:59 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60559 00:08:27.866 18:10:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60577 ]] 00:08:27.867 18:10:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60577 00:08:27.867 18:10:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60577 ']' 00:08:27.867 18:10:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60577 00:08:27.867 18:10:01 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:27.867 18:10:01 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.867 
18:10:01 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60577 00:08:27.867 killing process with pid 60577 00:08:27.867 18:10:01 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:27.867 18:10:01 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:27.867 18:10:01 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60577' 00:08:27.867 18:10:01 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60577 00:08:27.867 18:10:01 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60577 00:08:29.768 18:10:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:29.768 Process with pid 60559 is not found 00:08:29.768 18:10:04 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:29.768 18:10:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60559 ]] 00:08:29.768 18:10:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60559 00:08:29.768 18:10:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60559 ']' 00:08:29.768 18:10:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60559 00:08:29.768 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60559) - No such process 00:08:29.768 18:10:04 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60559 is not found' 00:08:29.768 18:10:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60577 ]] 00:08:29.768 18:10:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60577 00:08:29.768 Process with pid 60577 is not found 00:08:29.768 18:10:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60577 ']' 00:08:29.768 18:10:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60577 00:08:29.768 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60577) - No such process 00:08:29.768 18:10:04 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60577 is not found' 00:08:29.768 18:10:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:29.768 00:08:29.768 real 0m52.087s 00:08:29.768 user 1m29.283s 00:08:29.768 sys 0m8.091s 00:08:29.768 18:10:04 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.768 18:10:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:29.768 ************************************ 00:08:29.768 END TEST cpu_locks 00:08:29.768 ************************************ 00:08:30.027 ************************************ 00:08:30.027 END TEST event 00:08:30.027 ************************************ 00:08:30.027 00:08:30.027 real 1m24.306s 00:08:30.027 user 2m34.183s 00:08:30.027 sys 0m12.548s 00:08:30.027 18:10:04 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.027 18:10:04 event -- common/autotest_common.sh@10 -- # set +x 00:08:30.028 18:10:04 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:30.028 18:10:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.028 18:10:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.028 18:10:04 -- common/autotest_common.sh@10 -- # set +x 00:08:30.028 ************************************ 00:08:30.028 START TEST thread 00:08:30.028 ************************************ 00:08:30.028 18:10:04 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:08:30.028 * Looking for test storage... 
00:08:30.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:08:30.028 18:10:04 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:30.028 18:10:04 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:30.028 18:10:04 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:08:30.028 18:10:04 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:30.028 18:10:04 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.028 18:10:04 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.028 18:10:04 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.028 18:10:04 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.028 18:10:04 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.028 18:10:04 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.028 18:10:04 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.028 18:10:04 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.028 18:10:04 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.028 18:10:04 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.028 18:10:04 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.028 18:10:04 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:30.028 18:10:04 thread -- scripts/common.sh@345 -- # : 1 00:08:30.028 18:10:04 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.028 18:10:04 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:30.028 18:10:04 thread -- scripts/common.sh@365 -- # decimal 1 00:08:30.028 18:10:04 thread -- scripts/common.sh@353 -- # local d=1 00:08:30.028 18:10:04 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.028 18:10:04 thread -- scripts/common.sh@355 -- # echo 1 00:08:30.028 18:10:04 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.028 18:10:04 thread -- scripts/common.sh@366 -- # decimal 2 00:08:30.028 18:10:04 thread -- scripts/common.sh@353 -- # local d=2 00:08:30.028 18:10:04 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.028 18:10:04 thread -- scripts/common.sh@355 -- # echo 2 00:08:30.028 18:10:04 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.028 18:10:04 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.028 18:10:04 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.028 18:10:04 thread -- scripts/common.sh@368 -- # return 0 00:08:30.028 18:10:04 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.028 18:10:04 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:30.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.028 --rc genhtml_branch_coverage=1 00:08:30.028 --rc genhtml_function_coverage=1 00:08:30.028 --rc genhtml_legend=1 00:08:30.028 --rc geninfo_all_blocks=1 00:08:30.028 --rc geninfo_unexecuted_blocks=1 00:08:30.028 00:08:30.028 ' 00:08:30.028 18:10:04 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:30.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.028 --rc genhtml_branch_coverage=1 00:08:30.028 --rc genhtml_function_coverage=1 00:08:30.028 --rc genhtml_legend=1 00:08:30.028 --rc geninfo_all_blocks=1 00:08:30.028 --rc geninfo_unexecuted_blocks=1 00:08:30.028 00:08:30.028 ' 00:08:30.028 18:10:04 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:30.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:30.028 --rc genhtml_branch_coverage=1 00:08:30.028 --rc genhtml_function_coverage=1 00:08:30.028 --rc genhtml_legend=1 00:08:30.028 --rc geninfo_all_blocks=1 00:08:30.028 --rc geninfo_unexecuted_blocks=1 00:08:30.028 00:08:30.028 ' 00:08:30.028 18:10:04 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:30.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.028 --rc genhtml_branch_coverage=1 00:08:30.028 --rc genhtml_function_coverage=1 00:08:30.028 --rc genhtml_legend=1 00:08:30.028 --rc geninfo_all_blocks=1 00:08:30.028 --rc geninfo_unexecuted_blocks=1 00:08:30.028 00:08:30.028 ' 00:08:30.028 18:10:04 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:30.028 18:10:04 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:30.028 18:10:04 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.028 18:10:04 thread -- common/autotest_common.sh@10 -- # set +x 00:08:30.028 ************************************ 00:08:30.028 START TEST thread_poller_perf 00:08:30.028 ************************************ 00:08:30.028 18:10:04 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:30.286 [2024-11-26 18:10:04.527398] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:08:30.286 [2024-11-26 18:10:04.528544] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60772 ] 00:08:30.286 [2024-11-26 18:10:04.732207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.544 [2024-11-26 18:10:04.889036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.544 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:08:31.917 [2024-11-26T18:10:06.378Z] ====================================== 00:08:31.917 [2024-11-26T18:10:06.378Z] busy:2211118032 (cyc) 00:08:31.917 [2024-11-26T18:10:06.378Z] total_run_count: 289000 00:08:31.917 [2024-11-26T18:10:06.378Z] tsc_hz: 2200000000 (cyc) 00:08:31.917 [2024-11-26T18:10:06.378Z] ====================================== 00:08:31.917 [2024-11-26T18:10:06.378Z] poller_cost: 7650 (cyc), 3477 (nsec) 00:08:31.917 00:08:31.917 real 0m1.647s 00:08:31.917 user 0m1.428s 00:08:31.917 sys 0m0.107s 00:08:31.917 ************************************ 00:08:31.917 END TEST thread_poller_perf 00:08:31.917 ************************************ 00:08:31.917 18:10:06 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.917 18:10:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:31.917 18:10:06 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:31.917 18:10:06 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:31.917 18:10:06 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.918 18:10:06 thread -- common/autotest_common.sh@10 -- # set +x 00:08:31.918 ************************************ 00:08:31.918 START TEST thread_poller_perf 00:08:31.918 ************************************ 00:08:31.918 18:10:06 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:31.918 [2024-11-26 18:10:06.236595] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:08:31.918 [2024-11-26 18:10:06.236855] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60814 ] 00:08:32.175 [2024-11-26 18:10:06.450267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.175 [2024-11-26 18:10:06.583597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.175 Running 1000 pollers for 1 seconds with 0 microseconds period. 
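For the run above (-b 1000 -l 1 -t 1 reads as 1000 pollers, 1 microsecond period, 1 second), the poller_cost line follows from the two numbers printed before it: cycles per poll is busy cycles over total_run_count, and the nanosecond figure converts that through the 2.2 GHz TSC. A quick shell-arithmetic check (sketch; integer truncation reproduces the printed values):

  busy=2211118032 runs=289000 tsc_hz=2200000000
  cyc=$(( busy / runs ))                  # 7650 cyc per poll
  nsec=$(( cyc * 1000000000 / tsc_hz ))   # 3477 nsec per poll
  echo "poller_cost: $cyc (cyc), $nsec (nsec)"

The same formula applied to the zero-period run announced above yields the 591 cyc / 268 nsec reported next.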
00:08:33.549 [2024-11-26T18:10:08.010Z] ====================================== 00:08:33.549 [2024-11-26T18:10:08.010Z] busy:2204604270 (cyc) 00:08:33.549 [2024-11-26T18:10:08.010Z] total_run_count: 3724000 00:08:33.549 [2024-11-26T18:10:08.010Z] tsc_hz: 2200000000 (cyc) 00:08:33.549 [2024-11-26T18:10:08.010Z] ====================================== 00:08:33.549 [2024-11-26T18:10:08.010Z] poller_cost: 591 (cyc), 268 (nsec) 00:08:33.549 00:08:33.549 real 0m1.642s 00:08:33.549 user 0m1.402s 00:08:33.549 sys 0m0.128s 00:08:33.549 ************************************ 00:08:33.549 END TEST thread_poller_perf 00:08:33.549 ************************************ 00:08:33.549 18:10:07 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.549 18:10:07 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:33.549 18:10:07 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:33.549 ************************************ 00:08:33.549 END TEST thread 00:08:33.549 ************************************ 00:08:33.549 00:08:33.549 real 0m3.559s 00:08:33.549 user 0m2.953s 00:08:33.549 sys 0m0.380s 00:08:33.549 18:10:07 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.549 18:10:07 thread -- common/autotest_common.sh@10 -- # set +x 00:08:33.549 18:10:07 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:33.549 18:10:07 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:33.549 18:10:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.549 18:10:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.549 18:10:07 -- common/autotest_common.sh@10 -- # set +x 00:08:33.549 ************************************ 00:08:33.549 START TEST app_cmdline 00:08:33.549 ************************************ 00:08:33.549 18:10:07 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:33.549 * Looking for test storage... 
00:08:33.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:33.549 18:10:07 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:33.549 18:10:07 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:08:33.549 18:10:07 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:33.807 18:10:08 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:33.807 18:10:08 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.807 18:10:08 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.807 18:10:08 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.807 18:10:08 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.807 18:10:08 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.807 18:10:08 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.807 18:10:08 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.807 18:10:08 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.807 18:10:08 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.807 18:10:08 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.807 18:10:08 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.807 18:10:08 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:33.808 18:10:08 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:33.808 18:10:08 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.808 18:10:08 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:33.808 18:10:08 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:33.808 18:10:08 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:33.808 18:10:08 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.808 18:10:08 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:33.808 18:10:08 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.808 18:10:08 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:33.808 18:10:08 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:33.808 18:10:08 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.808 18:10:08 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:33.808 18:10:08 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.808 18:10:08 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.808 18:10:08 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.808 18:10:08 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:33.808 18:10:08 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.808 18:10:08 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:33.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.808 --rc genhtml_branch_coverage=1 00:08:33.808 --rc genhtml_function_coverage=1 00:08:33.808 --rc genhtml_legend=1 00:08:33.808 --rc geninfo_all_blocks=1 00:08:33.808 --rc geninfo_unexecuted_blocks=1 00:08:33.808 00:08:33.808 ' 00:08:33.808 18:10:08 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:33.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.808 --rc genhtml_branch_coverage=1 00:08:33.808 --rc genhtml_function_coverage=1 00:08:33.808 --rc genhtml_legend=1 00:08:33.808 --rc geninfo_all_blocks=1 00:08:33.808 --rc geninfo_unexecuted_blocks=1 00:08:33.808 
00:08:33.808 ' 00:08:33.808 18:10:08 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:33.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.808 --rc genhtml_branch_coverage=1 00:08:33.808 --rc genhtml_function_coverage=1 00:08:33.808 --rc genhtml_legend=1 00:08:33.808 --rc geninfo_all_blocks=1 00:08:33.808 --rc geninfo_unexecuted_blocks=1 00:08:33.808 00:08:33.808 ' 00:08:33.808 18:10:08 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:33.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.808 --rc genhtml_branch_coverage=1 00:08:33.808 --rc genhtml_function_coverage=1 00:08:33.808 --rc genhtml_legend=1 00:08:33.808 --rc geninfo_all_blocks=1 00:08:33.808 --rc geninfo_unexecuted_blocks=1 00:08:33.808 00:08:33.808 ' 00:08:33.808 18:10:08 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:33.808 18:10:08 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60898 00:08:33.808 18:10:08 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60898 00:08:33.808 18:10:08 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:33.808 18:10:08 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60898 ']' 00:08:33.808 18:10:08 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.808 18:10:08 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.808 18:10:08 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.808 18:10:08 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.808 18:10:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:33.808 [2024-11-26 18:10:08.220800] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
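This target is deliberately started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods should answer and anything else should fail with the JSON-RPC "Method not found" error (-32601), which is exactly what the env_dpdk_get_mem_stats probe below exercises. A manual sketch of the same three checks:

  scripts/rpc.py spdk_get_version                       # allowed: returns the version object
  scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort   # allowed: lists the permitted methods
  scripts/rpc.py env_dpdk_get_mem_stats                 # filtered: expect code -32601, Method not found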
00:08:33.808 [2024-11-26 18:10:08.221644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60898 ] 00:08:34.066 [2024-11-26 18:10:08.405411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.324 [2024-11-26 18:10:08.567078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.255 18:10:09 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.256 18:10:09 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:35.256 18:10:09 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:35.515 { 00:08:35.515 "version": "SPDK v25.01-pre git sha1 51a65534e", 00:08:35.515 "fields": { 00:08:35.515 "major": 25, 00:08:35.515 "minor": 1, 00:08:35.515 "patch": 0, 00:08:35.515 "suffix": "-pre", 00:08:35.515 "commit": "51a65534e" 00:08:35.515 } 00:08:35.515 } 00:08:35.515 18:10:09 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:35.515 18:10:09 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:35.515 18:10:09 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:35.515 18:10:09 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:35.515 18:10:09 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:35.515 18:10:09 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.515 18:10:09 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:35.515 18:10:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:35.515 18:10:09 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:35.515 18:10:09 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.515 18:10:09 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:35.515 18:10:09 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:35.515 18:10:09 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:35.515 18:10:09 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:35.515 18:10:09 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:35.515 18:10:09 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:35.515 18:10:09 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.515 18:10:09 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:35.515 18:10:09 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.515 18:10:09 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:35.515 18:10:09 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.515 18:10:09 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:35.515 18:10:09 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:35.515 18:10:09 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:35.773 request: 00:08:35.773 { 00:08:35.773 "method": "env_dpdk_get_mem_stats", 00:08:35.773 "req_id": 1 00:08:35.773 } 00:08:35.773 Got JSON-RPC error response 00:08:35.773 response: 00:08:35.773 { 00:08:35.773 "code": -32601, 00:08:35.773 "message": "Method not found" 00:08:35.773 } 00:08:35.773 18:10:10 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:08:35.773 18:10:10 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:35.773 18:10:10 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:35.773 18:10:10 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:35.773 18:10:10 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60898 00:08:35.773 18:10:10 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60898 ']' 00:08:35.773 18:10:10 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60898 00:08:35.773 18:10:10 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:35.773 18:10:10 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.773 18:10:10 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60898 00:08:35.773 killing process with pid 60898 00:08:35.773 18:10:10 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.773 18:10:10 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.773 18:10:10 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60898' 00:08:35.773 18:10:10 app_cmdline -- common/autotest_common.sh@973 -- # kill 60898 00:08:35.773 18:10:10 app_cmdline -- common/autotest_common.sh@978 -- # wait 60898 00:08:38.302 00:08:38.302 real 0m4.501s 00:08:38.302 user 0m4.909s 00:08:38.302 sys 0m0.683s 00:08:38.302 ************************************ 00:08:38.302 END TEST app_cmdline 00:08:38.302 ************************************ 00:08:38.302 18:10:12 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.302 18:10:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:38.302 18:10:12 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:38.302 18:10:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:38.302 18:10:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.302 18:10:12 -- common/autotest_common.sh@10 -- # set +x 00:08:38.302 ************************************ 00:08:38.302 START TEST version 00:08:38.302 ************************************ 00:08:38.302 18:10:12 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:38.302 * Looking for test storage... 
00:08:38.302 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:38.302 18:10:12 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:38.302 18:10:12 version -- common/autotest_common.sh@1693 -- # lcov --version 00:08:38.302 18:10:12 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:38.302 18:10:12 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:38.302 18:10:12 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.302 18:10:12 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.302 18:10:12 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.302 18:10:12 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.302 18:10:12 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.302 18:10:12 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.302 18:10:12 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.302 18:10:12 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.302 18:10:12 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.302 18:10:12 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.302 18:10:12 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.302 18:10:12 version -- scripts/common.sh@344 -- # case "$op" in 00:08:38.302 18:10:12 version -- scripts/common.sh@345 -- # : 1 00:08:38.302 18:10:12 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.302 18:10:12 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:38.302 18:10:12 version -- scripts/common.sh@365 -- # decimal 1 00:08:38.302 18:10:12 version -- scripts/common.sh@353 -- # local d=1 00:08:38.302 18:10:12 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.302 18:10:12 version -- scripts/common.sh@355 -- # echo 1 00:08:38.302 18:10:12 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.302 18:10:12 version -- scripts/common.sh@366 -- # decimal 2 00:08:38.302 18:10:12 version -- scripts/common.sh@353 -- # local d=2 00:08:38.302 18:10:12 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.302 18:10:12 version -- scripts/common.sh@355 -- # echo 2 00:08:38.302 18:10:12 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.302 18:10:12 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.302 18:10:12 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.302 18:10:12 version -- scripts/common.sh@368 -- # return 0 00:08:38.302 18:10:12 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.302 18:10:12 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:38.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.302 --rc genhtml_branch_coverage=1 00:08:38.302 --rc genhtml_function_coverage=1 00:08:38.302 --rc genhtml_legend=1 00:08:38.302 --rc geninfo_all_blocks=1 00:08:38.302 --rc geninfo_unexecuted_blocks=1 00:08:38.302 00:08:38.302 ' 00:08:38.302 18:10:12 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:38.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.302 --rc genhtml_branch_coverage=1 00:08:38.302 --rc genhtml_function_coverage=1 00:08:38.302 --rc genhtml_legend=1 00:08:38.302 --rc geninfo_all_blocks=1 00:08:38.302 --rc geninfo_unexecuted_blocks=1 00:08:38.302 00:08:38.302 ' 00:08:38.302 18:10:12 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:38.302 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:38.302 --rc genhtml_branch_coverage=1 00:08:38.302 --rc genhtml_function_coverage=1 00:08:38.302 --rc genhtml_legend=1 00:08:38.302 --rc geninfo_all_blocks=1 00:08:38.302 --rc geninfo_unexecuted_blocks=1 00:08:38.302 00:08:38.302 ' 00:08:38.302 18:10:12 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:38.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.302 --rc genhtml_branch_coverage=1 00:08:38.302 --rc genhtml_function_coverage=1 00:08:38.302 --rc genhtml_legend=1 00:08:38.302 --rc geninfo_all_blocks=1 00:08:38.302 --rc geninfo_unexecuted_blocks=1 00:08:38.302 00:08:38.302 ' 00:08:38.302 18:10:12 version -- app/version.sh@17 -- # get_header_version major 00:08:38.302 18:10:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:38.302 18:10:12 version -- app/version.sh@14 -- # cut -f2 00:08:38.302 18:10:12 version -- app/version.sh@14 -- # tr -d '"' 00:08:38.302 18:10:12 version -- app/version.sh@17 -- # major=25 00:08:38.302 18:10:12 version -- app/version.sh@18 -- # get_header_version minor 00:08:38.302 18:10:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:38.302 18:10:12 version -- app/version.sh@14 -- # tr -d '"' 00:08:38.302 18:10:12 version -- app/version.sh@14 -- # cut -f2 00:08:38.302 18:10:12 version -- app/version.sh@18 -- # minor=1 00:08:38.302 18:10:12 version -- app/version.sh@19 -- # get_header_version patch 00:08:38.302 18:10:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:38.302 18:10:12 version -- app/version.sh@14 -- # cut -f2 00:08:38.302 18:10:12 version -- app/version.sh@14 -- # tr -d '"' 00:08:38.302 18:10:12 version -- app/version.sh@19 -- # patch=0 00:08:38.302 18:10:12 version -- app/version.sh@20 -- # get_header_version suffix 00:08:38.302 18:10:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:38.302 18:10:12 version -- app/version.sh@14 -- # cut -f2 00:08:38.302 18:10:12 version -- app/version.sh@14 -- # tr -d '"' 00:08:38.302 18:10:12 version -- app/version.sh@20 -- # suffix=-pre 00:08:38.302 18:10:12 version -- app/version.sh@22 -- # version=25.1 00:08:38.302 18:10:12 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:38.302 18:10:12 version -- app/version.sh@28 -- # version=25.1rc0 00:08:38.302 18:10:12 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:38.302 18:10:12 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:38.302 18:10:12 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:38.302 18:10:12 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:38.302 00:08:38.302 real 0m0.261s 00:08:38.302 user 0m0.163s 00:08:38.302 sys 0m0.134s 00:08:38.302 ************************************ 00:08:38.302 END TEST version 00:08:38.302 ************************************ 00:08:38.302 18:10:12 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.302 18:10:12 version -- common/autotest_common.sh@10 -- # set +x 00:08:38.560 18:10:12 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:38.560 18:10:12 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:38.560 18:10:12 -- spdk/autotest.sh@194 -- # uname -s 00:08:38.560 18:10:12 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:38.560 18:10:12 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:38.560 18:10:12 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:38.560 18:10:12 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:08:38.560 18:10:12 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:38.560 18:10:12 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:38.560 18:10:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.560 18:10:12 -- common/autotest_common.sh@10 -- # set +x 00:08:38.560 ************************************ 00:08:38.560 START TEST blockdev_nvme 00:08:38.560 ************************************ 00:08:38.560 18:10:12 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:38.560 * Looking for test storage... 00:08:38.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:38.560 18:10:12 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:38.560 18:10:12 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:08:38.560 18:10:12 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:38.560 18:10:13 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:38.560 18:10:13 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.560 18:10:13 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.560 18:10:13 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.560 18:10:13 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.560 18:10:13 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.560 18:10:13 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.560 18:10:13 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.560 18:10:13 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.560 18:10:13 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.560 18:10:13 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.560 18:10:13 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.560 18:10:13 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:38.560 18:10:13 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:08:38.560 18:10:13 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.560 18:10:13 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.560 18:10:13 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:08:38.560 18:10:13 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:08:38.560 18:10:13 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.560 18:10:13 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:08:38.560 18:10:13 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.560 18:10:13 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:08:38.819 18:10:13 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:08:38.819 18:10:13 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.819 18:10:13 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:08:38.819 18:10:13 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.819 18:10:13 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.819 18:10:13 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.819 18:10:13 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:08:38.819 18:10:13 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.819 18:10:13 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:38.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.819 --rc genhtml_branch_coverage=1 00:08:38.819 --rc genhtml_function_coverage=1 00:08:38.819 --rc genhtml_legend=1 00:08:38.819 --rc geninfo_all_blocks=1 00:08:38.819 --rc geninfo_unexecuted_blocks=1 00:08:38.819 00:08:38.819 ' 00:08:38.819 18:10:13 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:38.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.819 --rc genhtml_branch_coverage=1 00:08:38.819 --rc genhtml_function_coverage=1 00:08:38.819 --rc genhtml_legend=1 00:08:38.819 --rc geninfo_all_blocks=1 00:08:38.819 --rc geninfo_unexecuted_blocks=1 00:08:38.819 00:08:38.819 ' 00:08:38.819 18:10:13 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:38.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.819 --rc genhtml_branch_coverage=1 00:08:38.819 --rc genhtml_function_coverage=1 00:08:38.819 --rc genhtml_legend=1 00:08:38.819 --rc geninfo_all_blocks=1 00:08:38.819 --rc geninfo_unexecuted_blocks=1 00:08:38.819 00:08:38.819 ' 00:08:38.819 18:10:13 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:38.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.819 --rc genhtml_branch_coverage=1 00:08:38.819 --rc genhtml_function_coverage=1 00:08:38.819 --rc genhtml_legend=1 00:08:38.819 --rc geninfo_all_blocks=1 00:08:38.819 --rc geninfo_unexecuted_blocks=1 00:08:38.819 00:08:38.819 ' 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:38.819 18:10:13 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:08:38.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61088 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:38.819 18:10:13 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61088 00:08:38.819 18:10:13 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61088 ']' 00:08:38.819 18:10:13 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.819 18:10:13 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.819 18:10:13 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.819 18:10:13 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.819 18:10:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.819 [2024-11-26 18:10:13.174444] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
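With the target listening, the next step below pipes the gen_nvme.sh output into load_subsystem_config to attach the four QEMU NVMe controllers in one shot. Replayed by hand, a single entry of that config reduces to one rpc.py call, roughly (sketch; flag names from rpc.py, addresses from the generated config):

  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
      # and likewise Nvme1..Nvme3 at 0000:00:11.0, 0000:00:12.0, 0000:00:13.0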
00:08:38.819 [2024-11-26 18:10:13.175220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61088 ] 00:08:39.077 [2024-11-26 18:10:13.371675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.077 [2024-11-26 18:10:13.535967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.028 18:10:14 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.028 18:10:14 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:08:40.028 18:10:14 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:08:40.028 18:10:14 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:08:40.028 18:10:14 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:08:40.028 18:10:14 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:40.028 18:10:14 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:40.286 18:10:14 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:40.286 18:10:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.286 18:10:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.578 18:10:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.578 18:10:14 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:08:40.578 18:10:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.578 18:10:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.578 18:10:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.578 18:10:14 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:08:40.578 18:10:14 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:08:40.578 18:10:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.578 18:10:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.578 18:10:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.578 18:10:14 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:08:40.578 18:10:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.578 18:10:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.578 18:10:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.578 18:10:14 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:40.578 18:10:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.578 18:10:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.578 18:10:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.578 18:10:14 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:08:40.578 18:10:14 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:08:40.578 18:10:14 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:08:40.578 18:10:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.578 18:10:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.578 18:10:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.578 18:10:14 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:08:40.578 18:10:14 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:08:40.579 18:10:14 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "85653b72-1ec7-4376-8e65-71ede11269d4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "85653b72-1ec7-4376-8e65-71ede11269d4",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "e743fd98-1bcd-4a2b-a55c-80b83f81dded"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e743fd98-1bcd-4a2b-a55c-80b83f81dded",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "4c9682e4-e635-4bd5-b52d-3ec25cd4a53a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4c9682e4-e635-4bd5-b52d-3ec25cd4a53a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "8c168592-ec3e-44b1-b8f3-c98a6d1e578c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8c168592-ec3e-44b1-b8f3-c98a6d1e578c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "4c181018-cace-4747-b351-51816d3e582b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "4c181018-cace-4747-b351-51816d3e582b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "78f3b86c-ad95-45b2-adf9-1752d28e120a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "78f3b86c-ad95-45b2-adf9-1752d28e120a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:40.579 18:10:15 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:08:40.579 18:10:15 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:08:40.579 18:10:15 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:08:40.579 18:10:15 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61088 00:08:40.579 18:10:15 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61088 ']' 00:08:40.579 18:10:15 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61088 00:08:40.579 18:10:15 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:08:40.579 18:10:15 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:40.579 18:10:15 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61088 00:08:40.837 killing process with pid 61088 00:08:40.837 18:10:15 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:40.837 18:10:15 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:40.837 18:10:15 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61088' 00:08:40.837 18:10:15 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61088 00:08:40.837 18:10:15 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61088 00:08:43.367 18:10:17 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:43.367 18:10:17 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:43.367 18:10:17 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:43.367 18:10:17 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.367 18:10:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:43.367 ************************************ 00:08:43.367 START TEST bdev_hello_world 00:08:43.367 ************************************ 00:08:43.367 18:10:17 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:43.367 [2024-11-26 18:10:17.367503] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:08:43.367 [2024-11-26 18:10:17.367739] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61189 ] 00:08:43.367 [2024-11-26 18:10:17.556721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.367 [2024-11-26 18:10:17.690005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.933 [2024-11-26 18:10:18.362037] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:43.933 [2024-11-26 18:10:18.362119] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:43.933 [2024-11-26 18:10:18.362165] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:43.933 [2024-11-26 18:10:18.365432] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:43.933 [2024-11-26 18:10:18.365941] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:43.933 [2024-11-26 18:10:18.365982] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:43.933 [2024-11-26 18:10:18.366207] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
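The hello_bdev run traced above opens the Nvme0n1 bdev from the generated JSON config, writes a buffer, and reads the string back. A minimal sketch of reproducing that step by hand, assuming the SPDK repo is checked out at /home/vagrant/spdk_repo/spdk as in this job (the /tmp/bdev.json path is illustrative):

# Describe the locally attached PCIe NVMe controllers as bdevs
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems > /tmp/bdev.json
# Write and read back a "Hello World!" string on the first namespace of Nvme0
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1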
00:08:43.933 00:08:43.933 [2024-11-26 18:10:18.366239] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:45.307 00:08:45.307 real 0m2.185s 00:08:45.307 user 0m1.768s 00:08:45.307 sys 0m0.305s 00:08:45.307 18:10:19 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.307 18:10:19 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:45.307 ************************************ 00:08:45.307 END TEST bdev_hello_world 00:08:45.307 ************************************ 00:08:45.307 18:10:19 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:08:45.307 18:10:19 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:45.307 18:10:19 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.307 18:10:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:45.307 ************************************ 00:08:45.307 START TEST bdev_bounds 00:08:45.307 ************************************ 00:08:45.307 18:10:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:08:45.307 18:10:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61231 00:08:45.307 Process bdevio pid: 61231 00:08:45.307 18:10:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:45.307 18:10:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61231' 00:08:45.307 18:10:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61231 00:08:45.307 18:10:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:45.307 18:10:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61231 ']' 00:08:45.307 18:10:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.307 18:10:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.307 18:10:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.307 18:10:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.307 18:10:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:45.307 [2024-11-26 18:10:19.615975] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
00:08:45.307 [2024-11-26 18:10:19.616233] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61231 ] 00:08:45.565 [2024-11-26 18:10:19.804426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:45.565 [2024-11-26 18:10:19.946791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.565 [2024-11-26 18:10:19.946918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.565 [2024-11-26 18:10:19.946934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.497 18:10:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.497 18:10:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:08:46.497 18:10:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:46.497 I/O targets: 00:08:46.497 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:46.497 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:46.497 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:46.497 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:46.497 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:46.497 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:46.497 00:08:46.497 00:08:46.497 CUnit - A unit testing framework for C - Version 2.1-3 00:08:46.497 http://cunit.sourceforge.net/ 00:08:46.497 00:08:46.497 00:08:46.497 Suite: bdevio tests on: Nvme3n1 00:08:46.497 Test: blockdev write read block ...passed 00:08:46.497 Test: blockdev write zeroes read block ...passed 00:08:46.497 Test: blockdev write zeroes read no split ...passed 00:08:46.497 Test: blockdev write zeroes read split ...passed 00:08:46.498 Test: blockdev write zeroes read split partial ...passed 00:08:46.498 Test: blockdev reset ...[2024-11-26 18:10:20.854218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:08:46.498 passed 00:08:46.498 Test: blockdev write read 8 blocks ...[2024-11-26 18:10:20.858066] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:08:46.498 passed 00:08:46.498 Test: blockdev write read size > 128k ...passed 00:08:46.498 Test: blockdev write read invalid size ...passed 00:08:46.498 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.498 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.498 Test: blockdev write read max offset ...passed 00:08:46.498 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:46.498 Test: blockdev writev readv 8 blocks ...passed 00:08:46.498 Test: blockdev writev readv 30 x 1block ...passed 00:08:46.498 Test: blockdev writev readv block ...passed 00:08:46.498 Test: blockdev writev readv size > 128k ...passed 00:08:46.498 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:46.498 Test: blockdev comparev and writev ...[2024-11-26 18:10:20.866624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c500a000 len:0x1000 00:08:46.498 [2024-11-26 18:10:20.866687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:46.498 passed 00:08:46.498 Test: blockdev nvme passthru rw ...passed 00:08:46.498 Test: blockdev nvme passthru vendor specific ...[2024-11-26 18:10:20.867583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:08:46.498 Test: blockdev nvme admin passthru ...RP2 0x0 00:08:46.498 [2024-11-26 18:10:20.867784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:46.498 passed 00:08:46.498 Test: blockdev copy ...passed 00:08:46.498 Suite: bdevio tests on: Nvme2n3 00:08:46.498 Test: blockdev write read block ...passed 00:08:46.498 Test: blockdev write zeroes read block ...passed 00:08:46.498 Test: blockdev write zeroes read no split ...passed 00:08:46.498 Test: blockdev write zeroes read split ...passed 00:08:46.498 Test: blockdev write zeroes read split partial ...passed 00:08:46.498 Test: blockdev reset ...[2024-11-26 18:10:20.943279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:46.498 [2024-11-26 18:10:20.947544] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:46.498 passed 00:08:46.498 Test: blockdev write read 8 blocks ...passed 00:08:46.498 Test: blockdev write read size > 128k ...passed 00:08:46.498 Test: blockdev write read invalid size ...passed 00:08:46.498 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.498 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.498 Test: blockdev write read max offset ...passed 00:08:46.498 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:46.498 Test: blockdev writev readv 8 blocks ...passed 00:08:46.498 Test: blockdev writev readv 30 x 1block ...passed 00:08:46.498 Test: blockdev writev readv block ...passed 00:08:46.498 Test: blockdev writev readv size > 128k ...passed 00:08:46.498 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:46.756 Test: blockdev comparev and writev ...[2024-11-26 18:10:20.956615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2a8206000 len:0x1000 00:08:46.756 [2024-11-26 18:10:20.956677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:46.756 passed 00:08:46.756 Test: blockdev nvme passthru rw ...passed 00:08:46.756 Test: blockdev nvme passthru vendor specific ...passed 00:08:46.756 Test: blockdev nvme admin passthru ...[2024-11-26 18:10:20.957463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:46.756 [2024-11-26 18:10:20.957506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:46.756 passed 00:08:46.756 Test: blockdev copy ...passed 00:08:46.756 Suite: bdevio tests on: Nvme2n2 00:08:46.756 Test: blockdev write read block ...passed 00:08:46.756 Test: blockdev write zeroes read block ...passed 00:08:46.756 Test: blockdev write zeroes read no split ...passed 00:08:46.756 Test: blockdev write zeroes read split ...passed 00:08:46.756 Test: blockdev write zeroes read split partial ...passed 00:08:46.756 Test: blockdev reset ...[2024-11-26 18:10:21.031545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:46.756 passed 00:08:46.756 Test: blockdev write read 8 blocks ...[2024-11-26 18:10:21.036104] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:46.756 passed 00:08:46.756 Test: blockdev write read size > 128k ...passed 00:08:46.756 Test: blockdev write read invalid size ...passed 00:08:46.756 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.756 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.756 Test: blockdev write read max offset ...passed 00:08:46.756 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:46.756 Test: blockdev writev readv 8 blocks ...passed 00:08:46.756 Test: blockdev writev readv 30 x 1block ...passed 00:08:46.756 Test: blockdev writev readv block ...passed 00:08:46.756 Test: blockdev writev readv size > 128k ...passed 00:08:46.756 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:46.756 Test: blockdev comparev and writev ...[2024-11-26 18:10:21.044783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d503c000 len:0x1000 00:08:46.756 [2024-11-26 18:10:21.044847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:46.756 passed 00:08:46.756 Test: blockdev nvme passthru rw ...passed 00:08:46.756 Test: blockdev nvme passthru vendor specific ...passed 00:08:46.756 Test: blockdev nvme admin passthru ...[2024-11-26 18:10:21.045744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:46.756 [2024-11-26 18:10:21.045793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:46.756 passed 00:08:46.756 Test: blockdev copy ...passed 00:08:46.756 Suite: bdevio tests on: Nvme2n1 00:08:46.756 Test: blockdev write read block ...passed 00:08:46.756 Test: blockdev write zeroes read block ...passed 00:08:46.756 Test: blockdev write zeroes read no split ...passed 00:08:46.756 Test: blockdev write zeroes read split ...passed 00:08:46.756 Test: blockdev write zeroes read split partial ...passed 00:08:46.756 Test: blockdev reset ...[2024-11-26 18:10:21.120189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:46.756 [2024-11-26 18:10:21.124808] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:46.756 passed 00:08:46.756 Test: blockdev write read 8 blocks ...passed 00:08:46.756 Test: blockdev write read size > 128k ...passed 00:08:46.756 Test: blockdev write read invalid size ...passed 00:08:46.756 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.756 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.756 Test: blockdev write read max offset ...passed 00:08:46.756 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:46.756 Test: blockdev writev readv 8 blocks ...passed 00:08:46.756 Test: blockdev writev readv 30 x 1block ...passed 00:08:46.756 Test: blockdev writev readv block ...passed 00:08:46.756 Test: blockdev writev readv size > 128k ...passed 00:08:46.756 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:46.756 Test: blockdev comparev and writev ...[2024-11-26 18:10:21.133712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5038000 len:0x1000 00:08:46.756 [2024-11-26 18:10:21.133778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:46.756 passed 00:08:46.756 Test: blockdev nvme passthru rw ...passed 00:08:46.756 Test: blockdev nvme passthru vendor specific ...[2024-11-26 18:10:21.134548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:46.756 [2024-11-26 18:10:21.134607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:46.757 passed 00:08:46.757 Test: blockdev nvme admin passthru ...passed 00:08:46.757 Test: blockdev copy ...passed 00:08:46.757 Suite: bdevio tests on: Nvme1n1 00:08:46.757 Test: blockdev write read block ...passed 00:08:46.757 Test: blockdev write zeroes read block ...passed 00:08:46.757 Test: blockdev write zeroes read no split ...passed 00:08:46.757 Test: blockdev write zeroes read split ...passed 00:08:46.757 Test: blockdev write zeroes read split partial ...passed 00:08:46.757 Test: blockdev reset ...[2024-11-26 18:10:21.207042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:46.757 [2024-11-26 18:10:21.210758] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:08:46.757 passed 00:08:46.757 Test: blockdev write read 8 blocks ...passed 00:08:46.757 Test: blockdev write read size > 128k ...passed 00:08:46.757 Test: blockdev write read invalid size ...passed 00:08:46.757 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.757 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.757 Test: blockdev write read max offset ...passed 00:08:46.757 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:47.015 Test: blockdev writev readv 8 blocks ...passed 00:08:47.015 Test: blockdev writev readv 30 x 1block ...passed 00:08:47.015 Test: blockdev writev readv block ...passed 00:08:47.015 Test: blockdev writev readv size > 128k ...passed 00:08:47.015 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:47.015 Test: blockdev comparev and writev ...[2024-11-26 18:10:21.219418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5034000 len:0x1000 00:08:47.015 [2024-11-26 18:10:21.219482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:47.015 passed 00:08:47.015 Test: blockdev nvme passthru rw ...passed 00:08:47.015 Test: blockdev nvme passthru vendor specific ...passed 00:08:47.015 Test: blockdev nvme admin passthru ...[2024-11-26 18:10:21.220303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:47.015 [2024-11-26 18:10:21.220352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:47.015 passed 00:08:47.015 Test: blockdev copy ...passed 00:08:47.015 Suite: bdevio tests on: Nvme0n1 00:08:47.015 Test: blockdev write read block ...passed 00:08:47.015 Test: blockdev write zeroes read block ...passed 00:08:47.015 Test: blockdev write zeroes read no split ...passed 00:08:47.015 Test: blockdev write zeroes read split ...passed 00:08:47.015 Test: blockdev write zeroes read split partial ...passed 00:08:47.015 Test: blockdev reset ...[2024-11-26 18:10:21.286384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:47.015 passed 00:08:47.015 Test: blockdev write read 8 blocks ...[2024-11-26 18:10:21.290341] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:08:47.015 passed 00:08:47.015 Test: blockdev write read size > 128k ...passed 00:08:47.015 Test: blockdev write read invalid size ...passed 00:08:47.015 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:47.015 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:47.015 Test: blockdev write read max offset ...passed 00:08:47.015 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:47.015 Test: blockdev writev readv 8 blocks ...passed 00:08:47.015 Test: blockdev writev readv 30 x 1block ...passed 00:08:47.015 Test: blockdev writev readv block ...passed 00:08:47.015 Test: blockdev writev readv size > 128k ...passed 00:08:47.015 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:47.015 Test: blockdev comparev and writev ...passed 00:08:47.015 Test: blockdev nvme passthru rw ...[2024-11-26 18:10:21.297280] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:47.015 separate metadata which is not supported yet. 00:08:47.015 passed 00:08:47.015 Test: blockdev nvme passthru vendor specific ...passed 00:08:47.015 Test: blockdev nvme admin passthru ...[2024-11-26 18:10:21.297877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:47.015 [2024-11-26 18:10:21.297940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:47.015 passed 00:08:47.015 Test: blockdev copy ...passed 00:08:47.015 00:08:47.015 Run Summary: Type Total Ran Passed Failed Inactive 00:08:47.015 suites 6 6 n/a 0 0 00:08:47.015 tests 138 138 138 0 0 00:08:47.015 asserts 893 893 893 0 n/a 00:08:47.015 00:08:47.015 Elapsed time = 1.424 seconds 00:08:47.015 0 00:08:47.015 18:10:21 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61231 00:08:47.015 18:10:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61231 ']' 00:08:47.015 18:10:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61231 00:08:47.015 18:10:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:08:47.015 18:10:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.015 18:10:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61231 00:08:47.015 18:10:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:47.015 18:10:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:47.015 18:10:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61231' 00:08:47.015 killing process with pid 61231 00:08:47.016 18:10:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61231 00:08:47.016 18:10:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61231 00:08:47.950 18:10:22 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:47.950 00:08:47.950 real 0m2.871s 00:08:47.950 user 0m7.334s 00:08:47.950 sys 0m0.453s 00:08:47.950 18:10:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.950 18:10:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:47.950 ************************************ 00:08:47.950 END TEST bdev_bounds 00:08:47.950 
************************************ 00:08:48.208 18:10:22 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:48.208 18:10:22 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:48.208 18:10:22 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.208 18:10:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:48.208 ************************************ 00:08:48.208 START TEST bdev_nbd 00:08:48.208 ************************************ 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61296 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61296 /var/tmp/spdk-nbd.sock 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61296 ']' 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.208 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.208 18:10:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:48.208 [2024-11-26 18:10:22.530190] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:08:48.208 [2024-11-26 18:10:22.530352] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.466 [2024-11-26 18:10:22.704134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.466 [2024-11-26 18:10:22.845589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.401 18:10:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.401 18:10:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:08:49.401 18:10:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:49.401 18:10:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.401 18:10:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:49.401 18:10:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:49.401 18:10:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:49.401 18:10:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.401 18:10:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:49.401 18:10:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:49.401 18:10:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:49.401 18:10:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:49.401 18:10:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:49.401 18:10:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:49.401 18:10:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:49.659 18:10:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:49.659 18:10:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:49.659 18:10:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:49.659 18:10:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:49.659 18:10:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:49.659 18:10:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:49.659 18:10:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:49.659 18:10:23 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:49.659 18:10:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:49.659 18:10:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:49.659 18:10:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:49.659 18:10:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:49.659 1+0 records in 00:08:49.659 1+0 records out 00:08:49.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000728029 s, 5.6 MB/s 00:08:49.659 18:10:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.659 18:10:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:49.659 18:10:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.659 18:10:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:49.659 18:10:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:49.659 18:10:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:49.659 18:10:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:49.659 18:10:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:49.917 18:10:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:49.917 18:10:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:49.917 18:10:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:49.917 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:49.917 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:49.917 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:49.917 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:49.917 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:49.917 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:49.917 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:49.917 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:49.917 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:49.917 1+0 records in 00:08:49.917 1+0 records out 00:08:49.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569323 s, 7.2 MB/s 00:08:49.917 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.917 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:49.917 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.917 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:49.917 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:49.917 18:10:24 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:49.917 18:10:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:49.917 18:10:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:50.176 18:10:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:50.176 18:10:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:50.176 18:10:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:50.176 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:08:50.176 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:50.176 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:50.176 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:50.176 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:08:50.176 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:50.176 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:50.176 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:50.176 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:50.176 1+0 records in 00:08:50.176 1+0 records out 00:08:50.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000653325 s, 6.3 MB/s 00:08:50.176 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.176 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:50.176 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.176 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:50.176 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:50.176 18:10:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:50.176 18:10:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:50.176 18:10:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:50.434 18:10:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:50.434 18:10:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:50.434 18:10:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:50.434 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:08:50.434 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:50.434 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:50.434 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:50.434 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:08:50.434 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:50.434 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:50.434 
18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:50.434 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:50.434 1+0 records in 00:08:50.434 1+0 records out 00:08:50.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613781 s, 6.7 MB/s 00:08:50.434 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.434 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:50.434 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.434 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:50.434 18:10:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:50.434 18:10:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:50.434 18:10:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:50.434 18:10:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:51.000 18:10:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:51.000 18:10:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:51.001 18:10:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:51.001 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:08:51.001 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:51.001 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:51.001 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:51.001 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:08:51.001 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:51.001 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:51.001 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:51.001 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:51.001 1+0 records in 00:08:51.001 1+0 records out 00:08:51.001 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000736213 s, 5.6 MB/s 00:08:51.001 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.001 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:51.001 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.001 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:51.001 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:51.001 18:10:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:51.001 18:10:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:51.001 18:10:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Nvme3n1 00:08:51.258 18:10:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:51.258 18:10:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:51.258 18:10:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:51.258 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:08:51.258 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:51.259 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:51.259 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:51.259 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:08:51.259 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:51.259 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:51.259 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:51.259 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:51.259 1+0 records in 00:08:51.259 1+0 records out 00:08:51.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000945513 s, 4.3 MB/s 00:08:51.259 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.259 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:51.259 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:51.259 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:51.259 18:10:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:51.259 18:10:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:51.259 18:10:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:51.259 18:10:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:51.517 18:10:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:51.517 { 00:08:51.517 "nbd_device": "/dev/nbd0", 00:08:51.517 "bdev_name": "Nvme0n1" 00:08:51.517 }, 00:08:51.517 { 00:08:51.517 "nbd_device": "/dev/nbd1", 00:08:51.517 "bdev_name": "Nvme1n1" 00:08:51.517 }, 00:08:51.517 { 00:08:51.517 "nbd_device": "/dev/nbd2", 00:08:51.517 "bdev_name": "Nvme2n1" 00:08:51.517 }, 00:08:51.517 { 00:08:51.517 "nbd_device": "/dev/nbd3", 00:08:51.517 "bdev_name": "Nvme2n2" 00:08:51.517 }, 00:08:51.517 { 00:08:51.517 "nbd_device": "/dev/nbd4", 00:08:51.517 "bdev_name": "Nvme2n3" 00:08:51.517 }, 00:08:51.517 { 00:08:51.517 "nbd_device": "/dev/nbd5", 00:08:51.517 "bdev_name": "Nvme3n1" 00:08:51.517 } 00:08:51.517 ]' 00:08:51.517 18:10:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:51.517 18:10:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:51.517 { 00:08:51.517 "nbd_device": "/dev/nbd0", 00:08:51.517 "bdev_name": "Nvme0n1" 00:08:51.517 }, 00:08:51.517 { 00:08:51.517 "nbd_device": "/dev/nbd1", 00:08:51.517 "bdev_name": "Nvme1n1" 00:08:51.517 }, 00:08:51.517 { 00:08:51.517 "nbd_device": "/dev/nbd2", 
00:08:51.517 "bdev_name": "Nvme2n1" 00:08:51.517 }, 00:08:51.517 { 00:08:51.517 "nbd_device": "/dev/nbd3", 00:08:51.517 "bdev_name": "Nvme2n2" 00:08:51.517 }, 00:08:51.517 { 00:08:51.517 "nbd_device": "/dev/nbd4", 00:08:51.517 "bdev_name": "Nvme2n3" 00:08:51.517 }, 00:08:51.517 { 00:08:51.517 "nbd_device": "/dev/nbd5", 00:08:51.517 "bdev_name": "Nvme3n1" 00:08:51.517 } 00:08:51.517 ]' 00:08:51.517 18:10:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:51.517 18:10:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:51.517 18:10:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:51.517 18:10:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:51.517 18:10:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:51.517 18:10:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:51.517 18:10:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.517 18:10:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:51.775 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:51.775 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:51.775 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:51.775 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.775 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.775 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:51.775 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:51.775 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.775 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.775 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:52.034 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:52.034 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:52.034 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:52.034 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.034 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.034 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:52.034 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:52.034 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.034 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.034 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:52.320 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:52.320 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 
-- # waitfornbd_exit nbd2 00:08:52.320 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:52.320 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.320 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.320 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:52.320 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:52.320 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.320 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.320 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:52.589 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:52.589 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:52.589 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:52.589 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.589 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.589 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:52.589 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:52.589 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.589 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.589 18:10:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:52.847 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:52.847 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:52.847 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:52.847 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.847 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.847 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:52.847 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:52.847 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.847 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.847 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:53.412 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:53.412 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:53.412 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:53.412 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:53.412 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:53.412 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:53.412 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:53.412 18:10:27 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:53.412 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:53.412 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.412 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:53.412 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:53.412 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:53.412 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:53.671 18:10:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Nvme0n1 /dev/nbd0 00:08:53.930 /dev/nbd0 00:08:53.930 18:10:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:53.930 18:10:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:53.930 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:53.930 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:53.930 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:53.930 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:53.930 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:53.930 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:53.930 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:53.930 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:53.930 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:53.930 1+0 records in 00:08:53.930 1+0 records out 00:08:53.930 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00059936 s, 6.8 MB/s 00:08:53.930 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:53.930 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:53.930 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:53.930 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:53.930 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:53.930 18:10:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:53.930 18:10:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:53.930 18:10:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:54.188 /dev/nbd1 00:08:54.188 18:10:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:54.188 18:10:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:54.188 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:54.188 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:54.188 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:54.188 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:54.188 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:54.188 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:54.188 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:54.188 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:54.188 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:54.188 1+0 records in 00:08:54.188 1+0 records out 00:08:54.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536814 s, 7.6 MB/s 
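The waitfornbd helper traced through this section polls in two stages: it first waits for the device node to appear in /proc/partitions, then retries a single direct-I/O read until it actually returns data. A minimal bash reconstruction of that logic follows; the retry delay, the failure return, and the /tmp scratch path are assumptions, since the trace only ever shows the successful path.

    waitfornbd() {
        local nbd_name=$1 i size
        # Stage 1: wait (up to 20 tries) for the kernel to publish the device.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; not visible in the trace
        done
        # Stage 2: retry a 4 KiB direct read until it returns bytes.
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
            sleep 0.1
        done
        return 1   # assumed: give up after 20 failed reads
    }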
00:08:54.188 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.188 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:54.188 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.188 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:54.188 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:54.188 18:10:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:54.188 18:10:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:54.188 18:10:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:54.446 /dev/nbd10 00:08:54.446 18:10:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:54.446 18:10:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:54.446 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:08:54.446 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:54.446 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:54.446 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:54.446 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:08:54.446 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:54.446 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:54.446 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:54.446 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:54.446 1+0 records in 00:08:54.446 1+0 records out 00:08:54.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555825 s, 7.4 MB/s 00:08:54.446 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.446 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:54.446 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.446 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:54.446 18:10:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:54.446 18:10:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:54.446 18:10:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:54.446 18:10:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:54.705 /dev/nbd11 00:08:54.705 18:10:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:54.705 18:10:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:54.705 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:08:54.705 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 
00:08:54.705 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:54.705 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:54.705 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:08:54.705 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:54.705 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:54.705 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:54.705 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:54.705 1+0 records in 00:08:54.705 1+0 records out 00:08:54.705 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563698 s, 7.3 MB/s 00:08:54.705 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.705 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:54.705 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.705 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:54.705 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:54.705 18:10:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:54.705 18:10:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:54.705 18:10:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:54.962 /dev/nbd12 00:08:54.962 18:10:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:54.962 18:10:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:54.962 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:08:54.962 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:54.962 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:54.962 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:54.962 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:08:54.962 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:54.962 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:54.962 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:54.962 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:54.962 1+0 records in 00:08:54.962 1+0 records out 00:08:54.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000614979 s, 6.7 MB/s 00:08:54.962 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.962 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:54.962 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.962 18:10:29 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:54.962 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:54.962 18:10:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:54.962 18:10:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:54.962 18:10:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:55.529 /dev/nbd13 00:08:55.529 18:10:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:55.529 18:10:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:55.529 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:08:55.529 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:55.529 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:55.529 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:55.529 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:08:55.529 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:55.529 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:55.529 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:55.529 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:55.529 1+0 records in 00:08:55.529 1+0 records out 00:08:55.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636318 s, 6.4 MB/s 00:08:55.529 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:55.529 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:55.529 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:55.529 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:55.529 18:10:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:55.529 18:10:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:55.529 18:10:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:55.529 18:10:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:55.529 18:10:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.529 18:10:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:55.529 18:10:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:55.529 { 00:08:55.529 "nbd_device": "/dev/nbd0", 00:08:55.529 "bdev_name": "Nvme0n1" 00:08:55.529 }, 00:08:55.529 { 00:08:55.529 "nbd_device": "/dev/nbd1", 00:08:55.529 "bdev_name": "Nvme1n1" 00:08:55.529 }, 00:08:55.529 { 00:08:55.529 "nbd_device": "/dev/nbd10", 00:08:55.529 "bdev_name": "Nvme2n1" 00:08:55.529 }, 00:08:55.529 { 00:08:55.529 "nbd_device": "/dev/nbd11", 00:08:55.529 "bdev_name": "Nvme2n2" 00:08:55.529 }, 00:08:55.529 { 00:08:55.529 "nbd_device": "/dev/nbd12", 00:08:55.529 "bdev_name": "Nvme2n3" 00:08:55.529 
}, 00:08:55.529 { 00:08:55.529 "nbd_device": "/dev/nbd13", 00:08:55.529 "bdev_name": "Nvme3n1" 00:08:55.529 } 00:08:55.529 ]' 00:08:55.529 18:10:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:55.529 { 00:08:55.529 "nbd_device": "/dev/nbd0", 00:08:55.529 "bdev_name": "Nvme0n1" 00:08:55.529 }, 00:08:55.529 { 00:08:55.529 "nbd_device": "/dev/nbd1", 00:08:55.529 "bdev_name": "Nvme1n1" 00:08:55.529 }, 00:08:55.529 { 00:08:55.529 "nbd_device": "/dev/nbd10", 00:08:55.529 "bdev_name": "Nvme2n1" 00:08:55.529 }, 00:08:55.529 { 00:08:55.529 "nbd_device": "/dev/nbd11", 00:08:55.529 "bdev_name": "Nvme2n2" 00:08:55.529 }, 00:08:55.529 { 00:08:55.529 "nbd_device": "/dev/nbd12", 00:08:55.529 "bdev_name": "Nvme2n3" 00:08:55.529 }, 00:08:55.529 { 00:08:55.529 "nbd_device": "/dev/nbd13", 00:08:55.529 "bdev_name": "Nvme3n1" 00:08:55.529 } 00:08:55.529 ]' 00:08:55.529 18:10:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:55.787 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:55.787 /dev/nbd1 00:08:55.787 /dev/nbd10 00:08:55.787 /dev/nbd11 00:08:55.787 /dev/nbd12 00:08:55.787 /dev/nbd13' 00:08:55.787 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:55.787 /dev/nbd1 00:08:55.787 /dev/nbd10 00:08:55.787 /dev/nbd11 00:08:55.787 /dev/nbd12 00:08:55.787 /dev/nbd13' 00:08:55.787 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:55.787 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:55.787 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:55.787 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:55.787 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:55.787 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:55.787 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:55.787 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:55.787 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:55.787 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:55.787 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:55.787 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:55.787 256+0 records in 00:08:55.787 256+0 records out 00:08:55.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00831356 s, 126 MB/s 00:08:55.787 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:55.787 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:55.787 256+0 records in 00:08:55.787 256+0 records out 00:08:55.787 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170104 s, 6.2 MB/s 00:08:55.787 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:55.787 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 
bs=4096 count=256 oflag=direct 00:08:56.044 256+0 records in 00:08:56.044 256+0 records out 00:08:56.044 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154926 s, 6.8 MB/s 00:08:56.044 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:56.044 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:56.302 256+0 records in 00:08:56.303 256+0 records out 00:08:56.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165638 s, 6.3 MB/s 00:08:56.303 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:56.303 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:56.303 256+0 records in 00:08:56.303 256+0 records out 00:08:56.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148909 s, 7.0 MB/s 00:08:56.303 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:56.303 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:56.561 256+0 records in 00:08:56.561 256+0 records out 00:08:56.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160801 s, 6.5 MB/s 00:08:56.561 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:56.561 18:10:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:56.818 256+0 records in 00:08:56.818 256+0 records out 00:08:56.818 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160791 s, 6.5 MB/s 00:08:56.818 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:56.818 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:56.818 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:56.818 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:56.818 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:56.818 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:56.819 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:56.819 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:56.819 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:56.819 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:56.819 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:56.819 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:56.819 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:56.819 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:56.819 18:10:31 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:56.819 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:56.819 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:56.819 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:56.819 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:56.819 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:56.819 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:56.819 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:56.819 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:56.819 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:56.819 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:56.819 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:56.819 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:57.077 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:57.077 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:57.077 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:57.077 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:57.077 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:57.077 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:57.077 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:57.077 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:57.077 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:57.077 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:57.335 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:57.335 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:57.335 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:57.335 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:57.335 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:57.335 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:57.335 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:57.335 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:57.335 18:10:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:57.335 18:10:31 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:57.593 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:57.593 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:57.593 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:57.593 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:57.593 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:57.593 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:57.593 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:57.593 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:57.593 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:57.593 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:58.159 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:58.159 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:58.159 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:58.159 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.159 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.159 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:58.159 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:58.159 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:58.159 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.159 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:58.417 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:58.417 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:58.417 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:58.417 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.417 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.417 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:58.417 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:58.417 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:58.417 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.417 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:58.675 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:58.675 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:58.675 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:58.675 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.675 18:10:32 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.675 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:58.675 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:58.675 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:58.675 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:58.675 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.675 18:10:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:58.933 18:10:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:58.933 18:10:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:58.933 18:10:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:58.933 18:10:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:58.933 18:10:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:58.933 18:10:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:58.933 18:10:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:58.933 18:10:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:58.933 18:10:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:58.933 18:10:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:58.933 18:10:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:58.933 18:10:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:58.933 18:10:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:58.933 18:10:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.933 18:10:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:58.933 18:10:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:59.191 malloc_lvol_verify 00:08:59.192 18:10:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:59.450 599662c1-3eb5-49ce-90c8-0a736317741a 00:08:59.450 18:10:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:59.709 279ff182-3332-41f7-9a76-2142e505ca26 00:08:59.709 18:10:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:59.968 /dev/nbd0 00:08:59.968 18:10:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:59.968 18:10:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:59.968 18:10:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:59.968 18:10:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:59.968 18:10:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:59.968 mke2fs 1.47.0 
(5-Feb-2023) 00:08:59.968 Discarding device blocks: 0/4096 done 00:08:59.968 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:00.226 00:09:00.226 Allocating group tables: 0/1 done 00:09:00.226 Writing inode tables: 0/1 done 00:09:00.226 Creating journal (1024 blocks): done 00:09:00.226 Writing superblocks and filesystem accounting information: 0/1 done 00:09:00.226 00:09:00.226 18:10:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:00.226 18:10:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.226 18:10:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:00.226 18:10:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:00.226 18:10:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:00.226 18:10:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:00.226 18:10:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:00.484 18:10:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:00.484 18:10:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:00.484 18:10:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:00.484 18:10:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:00.484 18:10:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:00.484 18:10:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:00.484 18:10:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:00.484 18:10:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:00.484 18:10:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61296 00:09:00.484 18:10:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61296 ']' 00:09:00.484 18:10:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61296 00:09:00.484 18:10:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:09:00.484 18:10:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.484 18:10:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61296 00:09:00.484 18:10:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:00.484 18:10:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:00.484 killing process with pid 61296 00:09:00.484 18:10:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61296' 00:09:00.484 18:10:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61296 00:09:00.484 18:10:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61296 00:09:01.499 18:10:35 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:01.499 00:09:01.499 real 0m13.463s 00:09:01.499 user 0m19.259s 00:09:01.499 sys 0m4.341s 00:09:01.499 18:10:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.499 18:10:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:01.499 ************************************ 00:09:01.499 END TEST bdev_nbd 00:09:01.499 
************************************ 00:09:01.499 18:10:35 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:09:01.499 18:10:35 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:09:01.499 skipping fio tests on NVMe due to multi-ns failures. 00:09:01.499 18:10:35 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:09:01.499 18:10:35 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:01.499 18:10:35 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:01.499 18:10:35 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:09:01.499 18:10:35 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.499 18:10:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:01.499 ************************************ 00:09:01.499 START TEST bdev_verify 00:09:01.499 ************************************ 00:09:01.499 18:10:35 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:01.757 [2024-11-26 18:10:36.032598] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:09:01.757 [2024-11-26 18:10:36.032780] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61707 ] 00:09:02.014 [2024-11-26 18:10:36.217964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:02.014 [2024-11-26 18:10:36.379432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.014 [2024-11-26 18:10:36.379439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.947 Running I/O for 5 seconds... 
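The verify stage reduces to a single bdevperf run against the generated bdev config; the full command appears verbatim in the trace above, and its results table follows below. Reproduced standalone from an SPDK checkout it would look roughly like this:

    # -q 128: queue depth; -o 4096: 4 KiB I/Os; -w verify: read-back verification;
    # -t 5: run for 5 seconds; -m 0x3: cores 0-1; -C passed through as in the trace.
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3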
00:09:04.815 19392.00 IOPS, 75.75 MiB/s [2024-11-26T18:10:40.648Z] 18944.00 IOPS, 74.00 MiB/s [2024-11-26T18:10:41.580Z] 19093.33 IOPS, 74.58 MiB/s [2024-11-26T18:10:42.515Z] 18832.00 IOPS, 73.56 MiB/s [2024-11-26T18:10:42.515Z] 18752.00 IOPS, 73.25 MiB/s 00:09:08.054 Latency(us) 00:09:08.054 [2024-11-26T18:10:42.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:08.054 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:08.054 Verification LBA range: start 0x0 length 0xbd0bd 00:09:08.054 Nvme0n1 : 5.11 1553.87 6.07 0.00 0.00 82182.57 17396.83 78643.20 00:09:08.054 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:08.054 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:09:08.054 Nvme0n1 : 5.07 1526.09 5.96 0.00 0.00 83415.23 9115.46 128688.87 00:09:08.054 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:08.054 Verification LBA range: start 0x0 length 0xa0000 00:09:08.054 Nvme1n1 : 5.11 1553.31 6.07 0.00 0.00 82093.96 15847.80 76260.07 00:09:08.054 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:08.054 Verification LBA range: start 0xa0000 length 0xa0000 00:09:08.054 Nvme1n1 : 5.09 1532.65 5.99 0.00 0.00 83137.38 16443.58 121539.49 00:09:08.054 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:08.054 Verification LBA range: start 0x0 length 0x80000 00:09:08.054 Nvme2n1 : 5.11 1552.75 6.07 0.00 0.00 81936.60 16086.11 72447.07 00:09:08.054 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:08.054 Verification LBA range: start 0x80000 length 0x80000 00:09:08.054 Nvme2n1 : 5.10 1532.17 5.99 0.00 0.00 82988.71 15966.95 133455.13 00:09:08.054 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:08.054 Verification LBA range: start 0x0 length 0x80000 00:09:08.054 Nvme2n2 : 5.11 1551.52 6.06 0.00 0.00 81825.93 18469.24 70063.94 00:09:08.054 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:08.054 Verification LBA range: start 0x80000 length 0x80000 00:09:08.054 Nvme2n2 : 5.10 1531.68 5.98 0.00 0.00 82850.88 15609.48 133455.13 00:09:08.054 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:08.054 Verification LBA range: start 0x0 length 0x80000 00:09:08.054 Nvme2n3 : 5.12 1550.29 6.06 0.00 0.00 81713.31 17277.67 72447.07 00:09:08.054 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:08.054 Verification LBA range: start 0x80000 length 0x80000 00:09:08.054 Nvme2n3 : 5.10 1531.18 5.98 0.00 0.00 82720.17 15192.44 133455.13 00:09:08.054 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:08.054 Verification LBA range: start 0x0 length 0x20000 00:09:08.054 Nvme3n1 : 5.12 1549.08 6.05 0.00 0.00 81601.36 10187.87 76736.70 00:09:08.054 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:08.054 Verification LBA range: start 0x20000 length 0x20000 00:09:08.054 Nvme3n1 : 5.10 1530.65 5.98 0.00 0.00 82588.90 11021.96 132501.88 00:09:08.054 [2024-11-26T18:10:42.515Z] =================================================================================================================== 00:09:08.054 [2024-11-26T18:10:42.515Z] Total : 18495.24 72.25 0.00 0.00 82416.27 9115.46 133455.13 00:09:09.428 00:09:09.428 real 0m7.732s 00:09:09.428 user 0m14.214s 00:09:09.428 sys 0m0.321s 00:09:09.428 18:10:43 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.428 ************************************ 00:09:09.428 END TEST bdev_verify 00:09:09.428 18:10:43 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:09.428 ************************************ 00:09:09.428 18:10:43 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:09.428 18:10:43 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:09:09.428 18:10:43 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.428 18:10:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:09.428 ************************************ 00:09:09.428 START TEST bdev_verify_big_io 00:09:09.428 ************************************ 00:09:09.428 18:10:43 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:09.428 [2024-11-26 18:10:43.810936] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:09:09.428 [2024-11-26 18:10:43.811115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61805 ] 00:09:09.686 [2024-11-26 18:10:43.994090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:09.948 [2024-11-26 18:10:44.170332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.948 [2024-11-26 18:10:44.170341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.882 Running I/O for 5 seconds... 
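As a sanity check on the verify table above, the MiB/s column is just IOPS scaled by the 4 KiB I/O size; the big_io run below scales by 64 KiB instead:

    $ echo '19392 * 4096 / 1048576' | bc -l     # first verify interval
    75.75000000000000000000
    $ echo '1996 * 65536 / 1048576' | bc -l     # first big_io interval below
    124.75000000000000000000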
00:09:16.064 1996.00 IOPS, 124.75 MiB/s [2024-11-26T18:10:51.092Z] 2730.50 IOPS, 170.66 MiB/s [2024-11-26T18:10:51.092Z] 3179.33 IOPS, 198.71 MiB/s 00:09:16.631 Latency(us) 00:09:16.631 [2024-11-26T18:10:51.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.632 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:16.632 Verification LBA range: start 0x0 length 0xbd0b 00:09:16.632 Nvme0n1 : 5.67 133.77 8.36 0.00 0.00 931181.28 20614.05 945624.90 00:09:16.632 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:16.632 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:16.632 Nvme0n1 : 5.68 131.82 8.24 0.00 0.00 929888.78 13881.72 960876.92 00:09:16.632 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:16.632 Verification LBA range: start 0x0 length 0xa000 00:09:16.632 Nvme1n1 : 5.67 131.62 8.23 0.00 0.00 914001.83 65297.69 983754.94 00:09:16.632 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:16.632 Verification LBA range: start 0xa000 length 0xa000 00:09:16.632 Nvme1n1 : 5.68 121.41 7.59 0.00 0.00 971304.65 82932.83 1525201.45 00:09:16.632 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:16.632 Verification LBA range: start 0x0 length 0x8000 00:09:16.632 Nvme2n1 : 5.67 135.44 8.46 0.00 0.00 870255.09 75783.45 835047.80 00:09:16.632 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:16.632 Verification LBA range: start 0x8000 length 0x8000 00:09:16.632 Nvme2n1 : 5.69 124.36 7.77 0.00 0.00 930052.33 102474.47 1548079.48 00:09:16.632 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:16.632 Verification LBA range: start 0x0 length 0x8000 00:09:16.632 Nvme2n2 : 5.76 137.58 8.60 0.00 0.00 826813.97 77213.32 991380.95 00:09:16.632 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:16.632 Verification LBA range: start 0x8000 length 0x8000 00:09:16.632 Nvme2n2 : 5.81 136.89 8.56 0.00 0.00 832963.68 28716.68 1578583.51 00:09:16.632 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:16.632 Verification LBA range: start 0x0 length 0x8000 00:09:16.632 Nvme2n3 : 5.81 146.90 9.18 0.00 0.00 760341.50 29074.15 1326925.27 00:09:16.632 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:16.632 Verification LBA range: start 0x8000 length 0x8000 00:09:16.632 Nvme2n3 : 5.83 140.04 8.75 0.00 0.00 790509.86 37653.41 1609087.53 00:09:16.632 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:16.632 Verification LBA range: start 0x0 length 0x2000 00:09:16.632 Nvme3n1 : 5.83 157.37 9.84 0.00 0.00 691870.49 1653.29 1029510.98 00:09:16.632 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:16.632 Verification LBA range: start 0x2000 length 0x2000 00:09:16.632 Nvme3n1 : 5.86 161.11 10.07 0.00 0.00 670786.11 4051.32 1647217.57 00:09:16.632 [2024-11-26T18:10:51.093Z] =================================================================================================================== 00:09:16.632 [2024-11-26T18:10:51.093Z] Total : 1658.31 103.64 0.00 0.00 834660.50 1653.29 1647217.57 00:09:18.533 00:09:18.533 real 0m9.134s 00:09:18.533 user 0m16.935s 00:09:18.533 sys 0m0.375s 00:09:18.533 18:10:52 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.533 18:10:52 
blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:18.533 ************************************ 00:09:18.533 END TEST bdev_verify_big_io 00:09:18.533 ************************************ 00:09:18.533 18:10:52 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:18.533 18:10:52 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:18.533 18:10:52 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.533 18:10:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:18.533 ************************************ 00:09:18.533 START TEST bdev_write_zeroes 00:09:18.533 ************************************ 00:09:18.533 18:10:52 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:18.792 [2024-11-26 18:10:52.995083] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:09:18.792 [2024-11-26 18:10:52.995234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61925 ] 00:09:18.792 [2024-11-26 18:10:53.178274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.050 [2024-11-26 18:10:53.340322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.615 Running I/O for 1 seconds... 
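For comparison, the three bdevperf stages in this build differ only in I/O size, workload, duration, and core usage; all three commands are visible verbatim in the trace:

    # bdev_verify:        -q 128 -o 4096  -w verify       -t 5 -C -m 0x3
    # bdev_verify_big_io: -q 128 -o 65536 -w verify       -t 5 -C -m 0x3
    # bdev_write_zeroes:  -q 128 -o 4096  -w write_zeroes -t 1   (single core)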
00:09:20.988 49856.00 IOPS, 194.75 MiB/s 00:09:20.988 Latency(us) 00:09:20.988 [2024-11-26T18:10:55.449Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:20.988 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:20.988 Nvme0n1 : 1.07 7919.64 30.94 0.00 0.00 16115.86 10128.29 82456.20 00:09:20.988 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:20.988 Nvme1n1 : 1.08 7908.40 30.89 0.00 0.00 16113.75 10724.07 78166.57 00:09:20.988 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:20.988 Nvme2n1 : 1.08 7897.11 30.85 0.00 0.00 16089.36 9413.35 81502.95 00:09:20.988 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:20.988 Nvme2n2 : 1.08 7885.86 30.80 0.00 0.00 16068.86 8281.37 87699.08 00:09:20.988 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:20.988 Nvme2n3 : 1.08 7874.69 30.76 0.00 0.00 16056.33 8102.63 86745.83 00:09:20.988 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:20.988 Nvme3n1 : 1.07 7812.08 30.52 0.00 0.00 16200.63 11736.90 85792.58 00:09:20.988 [2024-11-26T18:10:55.449Z] =================================================================================================================== 00:09:20.988 [2024-11-26T18:10:55.449Z] Total : 47297.77 184.76 0.00 0.00 16107.23 8102.63 87699.08 00:09:21.922 00:09:21.922 real 0m3.410s 00:09:21.922 user 0m2.997s 00:09:21.922 sys 0m0.286s 00:09:21.922 18:10:56 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.922 ************************************ 00:09:21.922 END TEST bdev_write_zeroes 00:09:21.922 18:10:56 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:21.922 ************************************ 00:09:21.922 18:10:56 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:21.922 18:10:56 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:21.922 18:10:56 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.922 18:10:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:21.922 ************************************ 00:09:21.922 START TEST bdev_json_nonenclosed 00:09:21.922 ************************************ 00:09:21.922 18:10:56 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:22.180 [2024-11-26 18:10:56.457657] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
00:09:22.180 [2024-11-26 18:10:56.457841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61984 ] 00:09:22.180 [2024-11-26 18:10:56.637406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.438 [2024-11-26 18:10:56.773003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.438 [2024-11-26 18:10:56.773124] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:22.438 [2024-11-26 18:10:56.773153] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:22.438 [2024-11-26 18:10:56.773167] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:22.697 00:09:22.697 real 0m0.674s 00:09:22.697 user 0m0.442s 00:09:22.697 sys 0m0.128s 00:09:22.697 18:10:57 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.697 18:10:57 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:22.697 ************************************ 00:09:22.697 END TEST bdev_json_nonenclosed 00:09:22.697 ************************************ 00:09:22.697 18:10:57 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:22.697 18:10:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:22.697 18:10:57 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.697 18:10:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:22.697 ************************************ 00:09:22.697 START TEST bdev_json_nonarray 00:09:22.697 ************************************ 00:09:22.697 18:10:57 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:22.966 [2024-11-26 18:10:57.205291] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:09:22.966 [2024-11-26 18:10:57.205580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62012 ] 00:09:22.966 [2024-11-26 18:10:57.388092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.235 [2024-11-26 18:10:57.531886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.235 [2024-11-26 18:10:57.532017] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
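Both JSON negative tests feed bdevperf a deliberately malformed config and expect a clean non-zero stop rather than a crash. Judging from the two *ERROR* lines, the loader wants a top-level object whose "subsystems" key is an array; presumably nonenclosed.json drops the outer braces and nonarray.json makes "subsystems" a non-array. A sketch of the assumed well-formed shape, with the entry contents purely hypothetical:

    # Assumed minimal well-formed shape (entry contents hypothetical):
    cat > example.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev", "config": [] }
      ]
    }
    EOF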
00:09:23.235 [2024-11-26 18:10:57.532048] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:23.235 [2024-11-26 18:10:57.532062] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:23.494 00:09:23.494 real 0m0.733s 00:09:23.494 user 0m0.481s 00:09:23.494 sys 0m0.142s 00:09:23.494 18:10:57 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.494 18:10:57 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:23.495 ************************************ 00:09:23.495 END TEST bdev_json_nonarray 00:09:23.495 ************************************ 00:09:23.495 18:10:57 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:09:23.495 18:10:57 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:09:23.495 18:10:57 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:09:23.495 18:10:57 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:09:23.495 18:10:57 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:09:23.495 18:10:57 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:23.495 18:10:57 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:23.495 18:10:57 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:09:23.495 18:10:57 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:09:23.495 18:10:57 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:09:23.495 18:10:57 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:09:23.495 00:09:23.495 real 0m45.086s 00:09:23.495 user 1m8.052s 00:09:23.495 sys 0m7.392s 00:09:23.495 18:10:57 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.495 18:10:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:23.495 ************************************ 00:09:23.495 END TEST blockdev_nvme 00:09:23.495 ************************************ 00:09:23.495 18:10:57 -- spdk/autotest.sh@209 -- # uname -s 00:09:23.495 18:10:57 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:09:23.495 18:10:57 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:23.495 18:10:57 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:23.495 18:10:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.495 18:10:57 -- common/autotest_common.sh@10 -- # set +x 00:09:23.495 ************************************ 00:09:23.495 START TEST blockdev_nvme_gpt 00:09:23.495 ************************************ 00:09:23.495 18:10:57 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:23.755 * Looking for test storage... 
00:09:23.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:23.755 18:10:57 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:23.755 18:10:57 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:23.755 18:10:57 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:09:23.755 18:10:58 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:23.755 18:10:58 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:09:23.755 18:10:58 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.755 18:10:58 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:23.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.755 --rc genhtml_branch_coverage=1 00:09:23.755 --rc genhtml_function_coverage=1 00:09:23.755 --rc genhtml_legend=1 00:09:23.755 --rc geninfo_all_blocks=1 00:09:23.755 --rc geninfo_unexecuted_blocks=1 00:09:23.755 00:09:23.755 ' 00:09:23.755 18:10:58 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:23.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.755 --rc 
genhtml_branch_coverage=1 00:09:23.755 --rc genhtml_function_coverage=1 00:09:23.755 --rc genhtml_legend=1 00:09:23.755 --rc geninfo_all_blocks=1 00:09:23.755 --rc geninfo_unexecuted_blocks=1 00:09:23.755 00:09:23.755 ' 00:09:23.755 18:10:58 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:23.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.755 --rc genhtml_branch_coverage=1 00:09:23.755 --rc genhtml_function_coverage=1 00:09:23.755 --rc genhtml_legend=1 00:09:23.755 --rc geninfo_all_blocks=1 00:09:23.755 --rc geninfo_unexecuted_blocks=1 00:09:23.755 00:09:23.755 ' 00:09:23.755 18:10:58 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:23.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.755 --rc genhtml_branch_coverage=1 00:09:23.755 --rc genhtml_function_coverage=1 00:09:23.755 --rc genhtml_legend=1 00:09:23.755 --rc geninfo_all_blocks=1 00:09:23.755 --rc geninfo_unexecuted_blocks=1 00:09:23.755 00:09:23.755 ' 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62094 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62094 
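The lcov gate traced above ("lt 1.15 2" via scripts/common.sh cmp_versions) splits both version strings on '.' and '-' and compares the fields numerically, left to right. A condensed standalone sketch of the same idea (hypothetical helper, not the actual scripts/common.sh code):

    lt() {   # true when dotted version $1 sorts before $2
        local IFS=.- v
        local -a ver1 ver2
        read -ra ver1 <<<"$1"
        read -ra ver2 <<<"$2"
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
        done
        return 1   # versions are equal
    }
    lt 1.15 2 && echo older   # prints "older", matching the trace's return 0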
00:09:23.755 18:10:58 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:23.755 18:10:58 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62094 ']' 00:09:23.755 18:10:58 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.755 18:10:58 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.755 18:10:58 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.755 18:10:58 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.755 18:10:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:24.013 [2024-11-26 18:10:58.251483] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:09:24.013 [2024-11-26 18:10:58.251708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62094 ] 00:09:24.013 [2024-11-26 18:10:58.446092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.271 [2024-11-26 18:10:58.603331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.205 18:10:59 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:25.205 18:10:59 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:09:25.205 18:10:59 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:09:25.205 18:10:59 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:09:25.205 18:10:59 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:25.464 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:25.722 Waiting for block devices as requested 00:09:25.722 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:25.722 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:25.981 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:25.981 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:31.245 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:31.245 18:11:05 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:09:31.245 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:09:31.245 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:09:31.245 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:09:31.245 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:31.245 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:09:31.245 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:09:31.245 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:31.245 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:31.245 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:31.245 18:11:05 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:09:31.245 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:09:31.245 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:31.245 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:31.245 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:31.245 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:09:31.245 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:09:31.245 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:31.245 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:31.245 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:31.245 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:09:31.245 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:09:31.245 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:09:31.246 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:31.246 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:31.246 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:09:31.246 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:09:31.246 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:09:31.246 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:31.246 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:31.246 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:09:31.246 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:09:31.246 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:09:31.246 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:31.246 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:09:31.246 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:09:31.246 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:09:31.246 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:09:31.246 18:11:05 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:09:31.246 18:11:05 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:09:31.246 18:11:05 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:09:31.246 18:11:05 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:09:31.246 18:11:05 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:09:31.246 18:11:05 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:09:31.246 18:11:05 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:09:31.246 18:11:05 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:09:31.246 18:11:05 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:09:31.246 BYT; 00:09:31.246 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:09:31.246 18:11:05 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:09:31.246 BYT; 00:09:31.246 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:09:31.246 18:11:05 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:09:31.246 18:11:05 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:09:31.246 18:11:05 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:09:31.246 18:11:05 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:09:31.246 18:11:05 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:31.246 18:11:05 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:09:31.246 18:11:05 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:09:31.246 18:11:05 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:09:31.246 18:11:05 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:31.246 18:11:05 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:31.246 18:11:05 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:09:31.246 18:11:05 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:09:31.246 18:11:05 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:31.246 18:11:05 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:09:31.246 18:11:05 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:31.246 18:11:05 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:31.246 18:11:05 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:31.246 18:11:05 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:09:31.246 18:11:05 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:09:31.246 18:11:05 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:31.246 18:11:05 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:31.246 18:11:05 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:09:31.246 18:11:05 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:09:31.246 18:11:05 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:31.246 18:11:05 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:09:31.246 18:11:05 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:31.246 18:11:05 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:31.246 18:11:05 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:31.246 18:11:05 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:09:32.179 The operation has completed successfully. 00:09:32.179 18:11:06 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:09:33.114 The operation has completed successfully. 00:09:33.114 18:11:07 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:33.681 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:34.247 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:34.247 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:34.247 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:34.247 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:34.505 18:11:08 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:09:34.505 18:11:08 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.505 18:11:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:34.505 [] 00:09:34.505 18:11:08 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.505 18:11:08 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:09:34.505 18:11:08 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:09:34.505 18:11:08 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:34.505 18:11:08 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:34.505 18:11:08 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:34.505 18:11:08 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.505 18:11:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:34.764 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.764 18:11:09 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:09:34.764 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.764 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:34.764 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.764 18:11:09 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:09:34.764 18:11:09 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:09:34.764 18:11:09 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.764 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:34.764 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.764 18:11:09 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:09:34.764 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.764 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:34.764 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.764 18:11:09 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:34.764 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.764 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:34.764 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.764 18:11:09 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:09:34.764 18:11:09 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:09:34.764 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.764 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:34.764 18:11:09 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:09:35.022 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.022 18:11:09 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:09:35.022 18:11:09 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:09:35.023 18:11:09 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "f9fbe20b-5067-4808-aea6-1bfdcede94ea"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f9fbe20b-5067-4808-aea6-1bfdcede94ea",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "2ff88643-c408-4712-af9c-5060324e18dd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2ff88643-c408-4712-af9c-5060324e18dd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "305984f1-1c65-414a-8f53-e392548e22a4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "305984f1-1c65-414a-8f53-e392548e22a4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "72f921c5-0ec2-4331-a64b-fc4da46e1129"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "72f921c5-0ec2-4331-a64b-fc4da46e1129",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "40d8c9d3-1a8b-4222-a31b-5f4350ada64a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "40d8c9d3-1a8b-4222-a31b-5f4350ada64a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:35.023 18:11:09 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:09:35.023 18:11:09 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:09:35.023 18:11:09 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:09:35.023 18:11:09 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62094 00:09:35.023 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62094 ']' 00:09:35.023 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62094 00:09:35.023 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:09:35.023 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.023 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62094 00:09:35.023 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.023 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.023 killing process with pid 62094 00:09:35.023 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62094' 00:09:35.023 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62094 00:09:35.023 18:11:09 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62094 00:09:37.584 18:11:11 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:37.584 18:11:11 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:37.584 18:11:11 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:09:37.584 18:11:11 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.584 18:11:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:37.584 ************************************ 00:09:37.584 START TEST bdev_hello_world 00:09:37.584 ************************************ 00:09:37.584 18:11:11 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:37.584 
[2024-11-26 18:11:11.717583] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:09:37.584 [2024-11-26 18:11:11.717762] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62730 ] 00:09:37.584 [2024-11-26 18:11:11.910623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.841 [2024-11-26 18:11:12.065169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.403 [2024-11-26 18:11:12.742325] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:38.403 [2024-11-26 18:11:12.742392] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:38.403 [2024-11-26 18:11:12.742427] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:38.403 [2024-11-26 18:11:12.745621] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:38.403 [2024-11-26 18:11:12.746129] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:38.403 [2024-11-26 18:11:12.746167] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:38.403 [2024-11-26 18:11:12.746422] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:38.403 00:09:38.403 [2024-11-26 18:11:12.746453] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:39.772 00:09:39.772 real 0m2.206s 00:09:39.772 user 0m1.801s 00:09:39.772 sys 0m0.292s 00:09:39.772 18:11:13 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.772 ************************************ 00:09:39.772 END TEST bdev_hello_world 00:09:39.772 ************************************ 00:09:39.772 18:11:13 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:39.772 18:11:13 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:09:39.772 18:11:13 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:39.772 18:11:13 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.772 18:11:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:39.772 ************************************ 00:09:39.772 START TEST bdev_bounds 00:09:39.772 ************************************ 00:09:39.772 18:11:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:09:39.772 18:11:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62772 00:09:39.772 18:11:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:39.772 Process bdevio pid: 62772 00:09:39.772 18:11:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62772' 00:09:39.772 18:11:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:39.772 18:11:13 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62772 00:09:39.772 18:11:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62772 ']' 00:09:39.772 18:11:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.772 18:11:13 
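The hello_bdev NOTICE lines above trace the canonical bdev API lifecycle; sketched as comments (the example's actual code lives in examples/bdev/hello_world/hello_bdev.c):

    # spdk_app_start -> hello_start():
    #   "Successfully started the application"
    #   open the bdev           ("Opening the bdev Nvme0n1")
    #   get an I/O channel      ("Opening io channel")
    #   write the string        ("Writing to the bdev" -> "bdev io write completed successfully")
    #   read it back            ("Reading io" -> "Read string from bdev : Hello World!")
    # spdk_app_stop             ("Stopping app")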
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.772 18:11:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.772 18:11:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.772 18:11:13 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:39.772 [2024-11-26 18:11:13.966002] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:09:39.772 [2024-11-26 18:11:13.966188] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62772 ] 00:09:39.772 [2024-11-26 18:11:14.143255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:40.038 [2024-11-26 18:11:14.294451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.038 [2024-11-26 18:11:14.294541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.038 [2024-11-26 18:11:14.294588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.622 18:11:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.622 18:11:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:09:40.622 18:11:15 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:40.881 I/O targets: 00:09:40.881 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:40.881 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:09:40.881 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:09:40.881 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:40.881 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:40.881 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:40.881 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:40.881 00:09:40.881 00:09:40.881 CUnit - A unit testing framework for C - Version 2.1-3 00:09:40.881 http://cunit.sourceforge.net/ 00:09:40.881 00:09:40.881 00:09:40.881 Suite: bdevio tests on: Nvme3n1 00:09:40.881 Test: blockdev write read block ...passed 00:09:40.881 Test: blockdev write zeroes read block ...passed 00:09:40.881 Test: blockdev write zeroes read no split ...passed 00:09:40.881 Test: blockdev write zeroes read split ...passed 00:09:40.881 Test: blockdev write zeroes read split partial ...passed 00:09:40.881 Test: blockdev reset ...[2024-11-26 18:11:15.199295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:09:40.881 [2024-11-26 18:11:15.203371] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
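A note for reading the bdevio suites that follow: the nvme_qpair NOTICE lines are expected outcomes, not failures. The completion status is printed as (SCT/SC), status code type and status code per the NVMe spec:

    # COMPARE FAILURE (02/85) -> SCT 0x2 (media and data integrity errors),
    #                            SC 0x85 (compare failure): the deliberate
    #                            miscompare the "comparev and writev" test drives
    # INVALID OPCODE  (00/01) -> SCT 0x0 (generic command status), SC 0x01:
    #                            the expected reply to the fabricated passthru
    #                            commands (decoded in the traces as FABRIC
    #                            CONNECT / FABRIC RESERVED)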
00:09:40.881 passed 00:09:40.881 Test: blockdev write read 8 blocks ...passed 00:09:40.881 Test: blockdev write read size > 128k ...passed 00:09:40.881 Test: blockdev write read invalid size ...passed 00:09:40.881 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:40.881 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:40.881 Test: blockdev write read max offset ...passed 00:09:40.881 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:40.881 Test: blockdev writev readv 8 blocks ...passed 00:09:40.881 Test: blockdev writev readv 30 x 1block ...passed 00:09:40.881 Test: blockdev writev readv block ...passed 00:09:40.881 Test: blockdev writev readv size > 128k ...passed 00:09:40.881 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:40.881 Test: blockdev comparev and writev ...[2024-11-26 18:11:15.211885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2804000 len:0x1000 00:09:40.881 [2024-11-26 18:11:15.211954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:40.881 passed 00:09:40.881 Test: blockdev nvme passthru rw ...passed 00:09:40.881 Test: blockdev nvme passthru vendor specific ...[2024-11-26 18:11:15.212819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:40.881 passed 00:09:40.881 Test: blockdev nvme admin passthru ...[2024-11-26 18:11:15.212864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:40.881 passed 00:09:40.881 Test: blockdev copy ...passed 00:09:40.881 Suite: bdevio tests on: Nvme2n3 00:09:40.881 Test: blockdev write read block ...passed 00:09:40.881 Test: blockdev write zeroes read block ...passed 00:09:40.881 Test: blockdev write zeroes read no split ...passed 00:09:40.881 Test: blockdev write zeroes read split ...passed 00:09:40.881 Test: blockdev write zeroes read split partial ...passed 00:09:40.881 Test: blockdev reset ...[2024-11-26 18:11:15.305761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:40.881 [2024-11-26 18:11:15.309936] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:40.881 passed 00:09:40.881 Test: blockdev write read 8 blocks ...passed 00:09:40.881 Test: blockdev write read size > 128k ...passed 00:09:40.881 Test: blockdev write read invalid size ...passed 00:09:40.881 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:40.881 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:40.881 Test: blockdev write read max offset ...passed 00:09:40.881 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:40.881 Test: blockdev writev readv 8 blocks ...passed 00:09:40.881 Test: blockdev writev readv 30 x 1block ...passed 00:09:40.881 Test: blockdev writev readv block ...passed 00:09:40.881 Test: blockdev writev readv size > 128k ...passed 00:09:40.881 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:40.881 Test: blockdev comparev and writev ...[2024-11-26 18:11:15.318715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c2802000 len:0x1000 00:09:40.881 [2024-11-26 18:11:15.318779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:40.881 passed 00:09:40.881 Test: blockdev nvme passthru rw ...passed 00:09:40.881 Test: blockdev nvme passthru vendor specific ...[2024-11-26 18:11:15.319729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:40.881 passed 00:09:40.881 Test: blockdev nvme admin passthru ...[2024-11-26 18:11:15.319772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:40.881 passed 00:09:40.881 Test: blockdev copy ...passed 00:09:40.881 Suite: bdevio tests on: Nvme2n2 00:09:40.881 Test: blockdev write read block ...passed 00:09:40.881 Test: blockdev write zeroes read block ...passed 00:09:40.881 Test: blockdev write zeroes read no split ...passed 00:09:41.140 Test: blockdev write zeroes read split ...passed 00:09:41.140 Test: blockdev write zeroes read split partial ...passed 00:09:41.140 Test: blockdev reset ...[2024-11-26 18:11:15.383566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:41.140 [2024-11-26 18:11:15.387802] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:41.140 passed 00:09:41.140 Test: blockdev write read 8 blocks ...passed 00:09:41.140 Test: blockdev write read size > 128k ...passed 00:09:41.140 Test: blockdev write read invalid size ...passed 00:09:41.140 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:41.140 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:41.140 Test: blockdev write read max offset ...passed 00:09:41.140 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:41.140 Test: blockdev writev readv 8 blocks ...passed 00:09:41.140 Test: blockdev writev readv 30 x 1block ...passed 00:09:41.141 Test: blockdev writev readv block ...passed 00:09:41.141 Test: blockdev writev readv size > 128k ...passed 00:09:41.141 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:41.141 Test: blockdev comparev and writev ...[2024-11-26 18:11:15.395572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6e38000 len:0x1000 00:09:41.141 [2024-11-26 18:11:15.395630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:41.141 passed 00:09:41.141 Test: blockdev nvme passthru rw ...passed 00:09:41.141 Test: blockdev nvme passthru vendor specific ...passed 00:09:41.141 Test: blockdev nvme admin passthru ...[2024-11-26 18:11:15.396491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:41.141 [2024-11-26 18:11:15.396530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:41.141 passed 00:09:41.141 Test: blockdev copy ...passed 00:09:41.141 Suite: bdevio tests on: Nvme2n1 00:09:41.141 Test: blockdev write read block ...passed 00:09:41.141 Test: blockdev write zeroes read block ...passed 00:09:41.141 Test: blockdev write zeroes read no split ...passed 00:09:41.141 Test: blockdev write zeroes read split ...passed 00:09:41.141 Test: blockdev write zeroes read split partial ...passed 00:09:41.141 Test: blockdev reset ...[2024-11-26 18:11:15.463143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:41.141 [2024-11-26 18:11:15.467162] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:41.141 passed 00:09:41.141 Test: blockdev write read 8 blocks ...passed 00:09:41.141 Test: blockdev write read size > 128k ...passed 00:09:41.141 Test: blockdev write read invalid size ...passed 00:09:41.141 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:41.141 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:41.141 Test: blockdev write read max offset ...passed 00:09:41.141 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:41.141 Test: blockdev writev readv 8 blocks ...passed 00:09:41.141 Test: blockdev writev readv 30 x 1block ...passed 00:09:41.141 Test: blockdev writev readv block ...passed 00:09:41.141 Test: blockdev writev readv size > 128k ...passed 00:09:41.141 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:41.141 Test: blockdev comparev and writev ...[2024-11-26 18:11:15.474680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6e34000 len:0x1000 00:09:41.141 [2024-11-26 18:11:15.474750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:41.141 passed 00:09:41.141 Test: blockdev nvme passthru rw ...passed 00:09:41.141 Test: blockdev nvme passthru vendor specific ...passed 00:09:41.141 Test: blockdev nvme admin passthru ...[2024-11-26 18:11:15.475528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:41.141 [2024-11-26 18:11:15.475581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:41.141 passed 00:09:41.141 Test: blockdev copy ...passed 00:09:41.141 Suite: bdevio tests on: Nvme1n1p2 00:09:41.141 Test: blockdev write read block ...passed 00:09:41.141 Test: blockdev write zeroes read block ...passed 00:09:41.141 Test: blockdev write zeroes read no split ...passed 00:09:41.141 Test: blockdev write zeroes read split ...passed 00:09:41.141 Test: blockdev write zeroes read split partial ...passed 00:09:41.141 Test: blockdev reset ...[2024-11-26 18:11:15.542844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:09:41.141 [2024-11-26 18:11:15.546551] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:09:41.141 passed 00:09:41.141 Test: blockdev write read 8 blocks ...passed 00:09:41.141 Test: blockdev write read size > 128k ...passed 00:09:41.141 Test: blockdev write read invalid size ...passed 00:09:41.141 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:41.141 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:41.141 Test: blockdev write read max offset ...passed 00:09:41.141 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:41.141 Test: blockdev writev readv 8 blocks ...passed 00:09:41.141 Test: blockdev writev readv 30 x 1block ...passed 00:09:41.141 Test: blockdev writev readv block ...passed 00:09:41.141 Test: blockdev writev readv size > 128k ...passed 00:09:41.141 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:41.141 Test: blockdev comparev and writev ...[2024-11-26 18:11:15.555053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d6e30000 len:0x1000 00:09:41.141 [2024-11-26 18:11:15.555110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:41.141 passed 00:09:41.141 Test: blockdev nvme passthru rw ...passed 00:09:41.141 Test: blockdev nvme passthru vendor specific ...passed 00:09:41.141 Test: blockdev nvme admin passthru ...passed 00:09:41.141 Test: blockdev copy ...passed 00:09:41.141 Suite: bdevio tests on: Nvme1n1p1 00:09:41.141 Test: blockdev write read block ...passed 00:09:41.141 Test: blockdev write zeroes read block ...passed 00:09:41.141 Test: blockdev write zeroes read no split ...passed 00:09:41.141 Test: blockdev write zeroes read split ...passed 00:09:41.400 Test: blockdev write zeroes read split partial ...passed 00:09:41.400 Test: blockdev reset ...[2024-11-26 18:11:15.610609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:09:41.400 [2024-11-26 18:11:15.614340] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:09:41.400 passed 00:09:41.400 Test: blockdev write read 8 blocks ...passed 00:09:41.400 Test: blockdev write read size > 128k ...passed 00:09:41.400 Test: blockdev write read invalid size ...passed 00:09:41.400 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:41.400 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:41.400 Test: blockdev write read max offset ...passed 00:09:41.400 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:41.400 Test: blockdev writev readv 8 blocks ...passed 00:09:41.400 Test: blockdev writev readv 30 x 1block ...passed 00:09:41.400 Test: blockdev writev readv block ...passed 00:09:41.400 Test: blockdev writev readv size > 128k ...passed 00:09:41.400 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:41.400 Test: blockdev comparev and writev ...[2024-11-26 18:11:15.622286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c320e000 len:0x1000 00:09:41.400 [2024-11-26 18:11:15.622345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:41.400 passed 00:09:41.400 Test: blockdev nvme passthru rw ...passed 00:09:41.400 Test: blockdev nvme passthru vendor specific ...passed 00:09:41.400 Test: blockdev nvme admin passthru ...passed 00:09:41.400 Test: blockdev copy ...passed 00:09:41.400 Suite: bdevio tests on: Nvme0n1 00:09:41.400 Test: blockdev write read block ...passed 00:09:41.400 Test: blockdev write zeroes read block ...passed 00:09:41.400 Test: blockdev write zeroes read no split ...passed 00:09:41.400 Test: blockdev write zeroes read split ...passed 00:09:41.400 Test: blockdev write zeroes read split partial ...passed 00:09:41.400 Test: blockdev reset ...[2024-11-26 18:11:15.677770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:41.400 [2024-11-26 18:11:15.681397] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:41.400 passed 00:09:41.400 Test: blockdev write read 8 blocks ...passed 00:09:41.400 Test: blockdev write read size > 128k ...passed 00:09:41.400 Test: blockdev write read invalid size ...passed 00:09:41.400 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:41.400 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:41.400 Test: blockdev write read max offset ...passed 00:09:41.400 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:41.400 Test: blockdev writev readv 8 blocks ...passed 00:09:41.400 Test: blockdev writev readv 30 x 1block ...passed 00:09:41.400 Test: blockdev writev readv block ...passed 00:09:41.400 Test: blockdev writev readv size > 128k ...passed 00:09:41.400 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:41.400 Test: blockdev comparev and writev ...[2024-11-26 18:11:15.688031] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:41.400 separate metadata which is not supported yet. 
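The comparev_and_writev case is skipped on Nvme0n1 because that namespace is formatted with separate (non-interleaved) metadata, which the bdevio compare-and-write path does not support yet; the ERROR line above is an expected skip, not a failure. Whether a bdev carries separate metadata can be read from its descriptor; a sketch, assuming the default RPC socket and that the md_size and md_interleave fields appear in bdev_get_bdevs output as on recent SPDK releases:

    # md_size > 0 together with "md_interleave": false means separate metadata
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
        | jq '.[0] | {md_size, md_interleave}'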
00:09:41.400 passed 00:09:41.400 Test: blockdev nvme passthru rw ...passed 00:09:41.400 Test: blockdev nvme passthru vendor specific ...[2024-11-26 18:11:15.688538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:41.400 [2024-11-26 18:11:15.688598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:41.400 passed 00:09:41.400 Test: blockdev nvme admin passthru ...passed 00:09:41.400 Test: blockdev copy ...passed 00:09:41.400 00:09:41.401 Run Summary: Type Total Ran Passed Failed Inactive 00:09:41.401 suites 7 7 n/a 0 0 00:09:41.401 tests 161 161 161 0 0 00:09:41.401 asserts 1025 1025 1025 0 n/a 00:09:41.401 00:09:41.401 Elapsed time = 1.508 seconds 00:09:41.401 0 00:09:41.401 18:11:15 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62772 00:09:41.401 18:11:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62772 ']' 00:09:41.401 18:11:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62772 00:09:41.401 18:11:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:09:41.401 18:11:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.401 18:11:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62772 00:09:41.401 18:11:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.401 killing process with pid 62772 00:09:41.401 18:11:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.401 18:11:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62772' 00:09:41.401 18:11:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62772 00:09:41.401 18:11:15 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62772 00:09:42.334 18:11:16 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:09:42.334 00:09:42.334 real 0m2.871s 00:09:42.334 user 0m7.425s 00:09:42.334 sys 0m0.447s 00:09:42.334 ************************************ 00:09:42.334 END TEST bdev_bounds 00:09:42.334 ************************************ 00:09:42.334 18:11:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.334 18:11:16 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:42.334 18:11:16 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:42.334 18:11:16 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:42.334 18:11:16 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.334 18:11:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:42.593 ************************************ 00:09:42.593 START TEST bdev_nbd 00:09:42.593 ************************************ 00:09:42.593 18:11:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:42.593 18:11:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:09:42.593 18:11:16 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:09:42.593 18:11:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:42.593 18:11:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:42.593 18:11:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:42.593 18:11:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:09:42.593 18:11:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:09:42.593 18:11:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:09:42.593 18:11:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:42.593 18:11:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:09:42.593 18:11:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:09:42.593 18:11:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:42.593 18:11:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:09:42.593 18:11:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:42.593 18:11:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:09:42.593 18:11:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62838 00:09:42.593 18:11:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:42.593 18:11:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62838 /var/tmp/spdk-nbd.sock 00:09:42.593 18:11:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62838 ']' 00:09:42.593 18:11:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:42.593 18:11:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.594 18:11:16 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:42.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:42.594 18:11:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:42.594 18:11:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.594 18:11:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:42.594 [2024-11-26 18:11:16.913300] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
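Once bdev_svc is up on /var/tmp/spdk-nbd.sock, the xtrace that follows is nbd_rpc_start_stop_verify: for each of the seven bdevs, nbd_start_disk is called without a device argument (so SPDK picks the first free /dev/nbdX and reports it back), waitfornbd polls /proc/partitions until the node appears, and a single 4 KiB direct-I/O read proves the device is usable. Condensed into standalone commands, with the bdev fixed to Nvme0n1 for illustration:

    # Start one bdev as an nbd device and verify it, mirroring the traced helper
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1    # prints e.g. /dev/nbd0
    for ((i = 1; i <= 20; i++)); do                          # same bound as waitfornbd
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1                                            # poll interval assumed
    done
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct && rm -f /tmp/nbdtest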
00:09:42.594 [2024-11-26 18:11:16.913504] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.852 [2024-11-26 18:11:17.102884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.852 [2024-11-26 18:11:17.238369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.785 18:11:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.785 18:11:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:09:43.785 18:11:17 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:43.785 18:11:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.785 18:11:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:43.785 18:11:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:43.785 18:11:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:43.785 18:11:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.785 18:11:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:43.785 18:11:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:43.785 18:11:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:43.785 18:11:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:43.785 18:11:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:43.785 18:11:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:43.785 18:11:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:44.043 18:11:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:44.043 18:11:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:44.043 18:11:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:44.043 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:44.043 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:44.043 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:44.043 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:44.043 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:44.043 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:44.043 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:44.043 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:44.043 18:11:18 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:44.043 1+0 records in 00:09:44.043 1+0 records out 00:09:44.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447851 s, 9.1 MB/s 00:09:44.043 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:44.043 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:44.043 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:44.043 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:44.043 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:44.043 18:11:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:44.043 18:11:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:44.043 18:11:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:09:44.301 18:11:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:44.301 18:11:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:44.301 18:11:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:44.301 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:44.302 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:44.302 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:44.302 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:44.302 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:44.302 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:44.302 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:44.302 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:44.302 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:44.302 1+0 records in 00:09:44.302 1+0 records out 00:09:44.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000622494 s, 6.6 MB/s 00:09:44.302 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:44.302 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:44.302 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:44.302 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:44.302 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:44.302 18:11:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:44.302 18:11:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:44.302 18:11:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:09:44.561 18:11:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:44.561 18:11:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:44.561 18:11:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:44.561 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:09:44.561 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:44.561 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:44.562 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:44.562 18:11:18 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:09:44.562 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:44.562 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:44.562 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:44.562 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:44.562 1+0 records in 00:09:44.562 1+0 records out 00:09:44.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000670461 s, 6.1 MB/s 00:09:44.562 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:44.562 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:44.562 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:44.562 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:44.562 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:44.562 18:11:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:44.562 18:11:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:44.562 18:11:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:45.128 18:11:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:45.128 18:11:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:45.128 18:11:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:45.128 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:09:45.128 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:45.128 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:45.128 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:45.128 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:09:45.128 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:45.128 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:45.128 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:45.128 18:11:19 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:45.128 1+0 records in 00:09:45.128 1+0 records out 00:09:45.128 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000868776 s, 4.7 MB/s 00:09:45.128 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:45.128 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:45.128 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:45.128 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:45.128 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:45.128 18:11:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:45.128 18:11:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:45.128 18:11:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:45.386 18:11:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:45.386 18:11:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:45.386 18:11:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:45.386 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:09:45.386 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:45.386 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:45.386 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:45.386 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:09:45.386 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:45.386 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:45.386 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:45.386 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:45.386 1+0 records in 00:09:45.386 1+0 records out 00:09:45.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000628785 s, 6.5 MB/s 00:09:45.386 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:45.386 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:45.386 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:45.386 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:45.386 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:45.386 18:11:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:45.386 18:11:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:45.386 18:11:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:09:45.644 18:11:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:45.644 18:11:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:45.644 18:11:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:45.644 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:09:45.644 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:45.644 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:45.644 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:45.644 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:09:45.644 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:45.644 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:45.644 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:45.644 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:45.644 1+0 records in 00:09:45.644 1+0 records out 00:09:45.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000840475 s, 4.9 MB/s 00:09:45.644 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:45.644 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:45.644 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:45.644 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:45.644 18:11:19 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:45.644 18:11:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:45.644 18:11:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:45.644 18:11:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:45.902 18:11:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:45.902 18:11:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:45.902 18:11:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:45.902 18:11:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:09:45.902 18:11:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:45.902 18:11:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:45.902 18:11:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:45.902 18:11:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:09:45.902 18:11:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:45.902 18:11:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:45.902 18:11:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:45.902 18:11:20 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:45.902 1+0 records in 00:09:45.902 1+0 records out 00:09:45.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000677897 s, 6.0 MB/s 00:09:45.902 18:11:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:45.902 18:11:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:45.902 18:11:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:45.902 18:11:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:45.902 18:11:20 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:45.902 18:11:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:45.902 18:11:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:45.902 18:11:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:46.468 18:11:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:46.468 { 00:09:46.468 "nbd_device": "/dev/nbd0", 00:09:46.468 "bdev_name": "Nvme0n1" 00:09:46.468 }, 00:09:46.468 { 00:09:46.468 "nbd_device": "/dev/nbd1", 00:09:46.468 "bdev_name": "Nvme1n1p1" 00:09:46.468 }, 00:09:46.468 { 00:09:46.468 "nbd_device": "/dev/nbd2", 00:09:46.468 "bdev_name": "Nvme1n1p2" 00:09:46.468 }, 00:09:46.468 { 00:09:46.468 "nbd_device": "/dev/nbd3", 00:09:46.468 "bdev_name": "Nvme2n1" 00:09:46.468 }, 00:09:46.468 { 00:09:46.468 "nbd_device": "/dev/nbd4", 00:09:46.468 "bdev_name": "Nvme2n2" 00:09:46.468 }, 00:09:46.468 { 00:09:46.468 "nbd_device": "/dev/nbd5", 00:09:46.468 "bdev_name": "Nvme2n3" 00:09:46.468 }, 00:09:46.468 { 00:09:46.468 "nbd_device": "/dev/nbd6", 00:09:46.468 "bdev_name": "Nvme3n1" 00:09:46.468 } 00:09:46.468 ]' 00:09:46.469 18:11:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:46.469 18:11:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:46.469 { 00:09:46.469 "nbd_device": "/dev/nbd0", 00:09:46.469 "bdev_name": "Nvme0n1" 00:09:46.469 }, 00:09:46.469 { 00:09:46.469 "nbd_device": "/dev/nbd1", 00:09:46.469 "bdev_name": "Nvme1n1p1" 00:09:46.469 }, 00:09:46.469 { 00:09:46.469 "nbd_device": "/dev/nbd2", 00:09:46.469 "bdev_name": "Nvme1n1p2" 00:09:46.469 }, 00:09:46.469 { 00:09:46.469 "nbd_device": "/dev/nbd3", 00:09:46.469 "bdev_name": "Nvme2n1" 00:09:46.469 }, 00:09:46.469 { 00:09:46.469 "nbd_device": "/dev/nbd4", 00:09:46.469 "bdev_name": "Nvme2n2" 00:09:46.469 }, 00:09:46.469 { 00:09:46.469 "nbd_device": "/dev/nbd5", 00:09:46.469 "bdev_name": "Nvme2n3" 00:09:46.469 }, 00:09:46.469 { 00:09:46.469 "nbd_device": "/dev/nbd6", 00:09:46.469 "bdev_name": "Nvme3n1" 00:09:46.469 } 00:09:46.469 ]' 00:09:46.469 18:11:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:46.469 18:11:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:46.469 18:11:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.469 18:11:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:46.469 18:11:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:46.469 18:11:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:46.469 18:11:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.469 18:11:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:46.727 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:46.727 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:46.727 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:46.727 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.727 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.727 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:46.727 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:46.727 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:46.727 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.727 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:46.985 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:46.985 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:46.985 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:46.985 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.985 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.985 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:46.985 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:46.985 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:46.985 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.985 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:47.244 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:47.244 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:47.244 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:47.244 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.244 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.244 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:47.244 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:47.244 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:47.244 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:47.244 18:11:21 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:47.553 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:47.553 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:47.553 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:47.553 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.553 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.553 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:47.553 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:47.553 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:47.553 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:47.553 18:11:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:47.827 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:47.827 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:47.827 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:47.827 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.827 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.827 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:47.827 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:47.827 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:47.827 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:47.827 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:48.085 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:48.085 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:48.085 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:48.085 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:48.085 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:48.085 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:48.085 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:48.085 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:48.085 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:48.085 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:48.343 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:48.343 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:48.343 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:09:48.343 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:48.343 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:48.343 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:48.343 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:48.343 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:48.343 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:48.343 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:48.343 18:11:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:48.601 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:48.601 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:48.601 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:48.859 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:48.859 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:48.859 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:48.859 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:48.859 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:48.859 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:48.859 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:48.859 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:48.859 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:48.859 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:48.859 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:48.859 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:48.859 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:48.859 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:48.859 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:48.859 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:48.859 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:48.859 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:48.859 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:48.859 
18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:48.859 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:48.859 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:48.859 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:48.859 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:48.859 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:49.117 /dev/nbd0 00:09:49.117 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:49.117 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:49.117 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:49.117 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:49.117 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:49.117 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:49.118 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:49.118 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:49.118 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:49.118 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:49.118 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:49.118 1+0 records in 00:09:49.118 1+0 records out 00:09:49.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000823188 s, 5.0 MB/s 00:09:49.118 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.118 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:49.118 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.118 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:49.118 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:49.118 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:49.118 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:49.118 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:09:49.376 /dev/nbd1 00:09:49.376 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:49.376 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:49.376 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:49.376 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:49.376 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:49.376 18:11:23 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:49.376 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:49.376 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:49.376 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:49.376 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:49.376 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:49.376 1+0 records in 00:09:49.376 1+0 records out 00:09:49.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00070043 s, 5.8 MB/s 00:09:49.376 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.376 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:49.376 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.376 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:49.376 18:11:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:49.376 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:49.376 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:49.376 18:11:23 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:09:49.634 /dev/nbd10 00:09:49.634 18:11:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:49.634 18:11:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:49.634 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:09:49.634 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:49.634 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:49.634 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:49.634 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:09:49.634 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:49.634 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:49.634 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:49.634 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:49.634 1+0 records in 00:09:49.634 1+0 records out 00:09:49.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000673814 s, 6.1 MB/s 00:09:49.635 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.893 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:49.893 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.893 18:11:24 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:49.893 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:49.893 18:11:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:49.893 18:11:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:49.893 18:11:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:50.151 /dev/nbd11 00:09:50.151 18:11:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:50.151 18:11:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:50.151 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:09:50.151 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:50.151 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:50.151 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:50.151 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:09:50.151 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:50.151 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:50.151 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:50.151 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:50.151 1+0 records in 00:09:50.151 1+0 records out 00:09:50.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569487 s, 7.2 MB/s 00:09:50.151 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:50.151 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:50.151 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:50.151 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:50.151 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:50.151 18:11:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:50.151 18:11:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:50.151 18:11:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:50.411 /dev/nbd12 00:09:50.411 18:11:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:50.411 18:11:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:50.411 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:09:50.411 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:50.411 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:50.411 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:50.411 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
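In this second pass (nbd_rpc_data_verify) each bdev is attached to an explicitly requested node, e.g. nbd_start_disk Nvme2n1 /dev/nbd11, instead of letting SPDK pick one, and further below each node is exercised with a 1 MiB dd of random data (bs=4096 count=256, oflag=direct). A standalone equivalent of one iteration, with the read-back check done via cmp (an assumption; the helper's own verify step may differ in detail):

    # Attach a bdev at a chosen nbd node, write random data, read it back
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
    cmp -n $((4096 * 256)) /tmp/nbdrandtest /dev/nbd11   # compare the first 1 MiB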
00:09:50.411 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:50.411 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:50.411 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:50.411 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:50.411 1+0 records in 00:09:50.411 1+0 records out 00:09:50.411 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010077 s, 4.1 MB/s 00:09:50.411 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:50.411 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:50.411 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:50.411 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:50.411 18:11:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:50.411 18:11:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:50.411 18:11:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:50.411 18:11:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:50.670 /dev/nbd13 00:09:50.670 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:50.670 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:50.670 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:09:50.670 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:50.670 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:50.670 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:50.670 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:09:50.670 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:50.670 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:50.670 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:50.670 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:50.670 1+0 records in 00:09:50.670 1+0 records out 00:09:50.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000781412 s, 5.2 MB/s 00:09:50.670 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:50.670 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:50.670 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:50.670 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:50.670 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:50.670 18:11:25 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:50.670 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:50.670 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:51.237 /dev/nbd14 00:09:51.237 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:51.237 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:51.237 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:09:51.237 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:51.237 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:51.237 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:51.237 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:09:51.237 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:51.237 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:51.237 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:51.237 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:51.237 1+0 records in 00:09:51.237 1+0 records out 00:09:51.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000655478 s, 6.2 MB/s 00:09:51.237 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:51.237 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:51.237 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:51.237 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:51.237 18:11:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:51.237 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:51.237 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:51.237 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:51.237 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.237 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:51.495 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:51.495 { 00:09:51.495 "nbd_device": "/dev/nbd0", 00:09:51.495 "bdev_name": "Nvme0n1" 00:09:51.495 }, 00:09:51.495 { 00:09:51.495 "nbd_device": "/dev/nbd1", 00:09:51.495 "bdev_name": "Nvme1n1p1" 00:09:51.495 }, 00:09:51.495 { 00:09:51.495 "nbd_device": "/dev/nbd10", 00:09:51.495 "bdev_name": "Nvme1n1p2" 00:09:51.495 }, 00:09:51.495 { 00:09:51.495 "nbd_device": "/dev/nbd11", 00:09:51.495 "bdev_name": "Nvme2n1" 00:09:51.495 }, 00:09:51.495 { 00:09:51.495 "nbd_device": "/dev/nbd12", 00:09:51.495 "bdev_name": "Nvme2n2" 00:09:51.495 }, 00:09:51.495 { 00:09:51.495 "nbd_device": "/dev/nbd13", 00:09:51.495 "bdev_name": "Nvme2n3" 
00:09:51.495 }, 00:09:51.495 { 00:09:51.495 "nbd_device": "/dev/nbd14", 00:09:51.495 "bdev_name": "Nvme3n1" 00:09:51.495 } 00:09:51.495 ]' 00:09:51.495 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:51.495 { 00:09:51.495 "nbd_device": "/dev/nbd0", 00:09:51.495 "bdev_name": "Nvme0n1" 00:09:51.495 }, 00:09:51.495 { 00:09:51.495 "nbd_device": "/dev/nbd1", 00:09:51.495 "bdev_name": "Nvme1n1p1" 00:09:51.495 }, 00:09:51.495 { 00:09:51.495 "nbd_device": "/dev/nbd10", 00:09:51.495 "bdev_name": "Nvme1n1p2" 00:09:51.495 }, 00:09:51.495 { 00:09:51.495 "nbd_device": "/dev/nbd11", 00:09:51.495 "bdev_name": "Nvme2n1" 00:09:51.495 }, 00:09:51.495 { 00:09:51.495 "nbd_device": "/dev/nbd12", 00:09:51.495 "bdev_name": "Nvme2n2" 00:09:51.495 }, 00:09:51.495 { 00:09:51.495 "nbd_device": "/dev/nbd13", 00:09:51.495 "bdev_name": "Nvme2n3" 00:09:51.495 }, 00:09:51.495 { 00:09:51.495 "nbd_device": "/dev/nbd14", 00:09:51.495 "bdev_name": "Nvme3n1" 00:09:51.495 } 00:09:51.495 ]' 00:09:51.495 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:51.495 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:51.495 /dev/nbd1 00:09:51.495 /dev/nbd10 00:09:51.495 /dev/nbd11 00:09:51.495 /dev/nbd12 00:09:51.495 /dev/nbd13 00:09:51.495 /dev/nbd14' 00:09:51.495 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:51.495 /dev/nbd1 00:09:51.495 /dev/nbd10 00:09:51.495 /dev/nbd11 00:09:51.495 /dev/nbd12 00:09:51.495 /dev/nbd13 00:09:51.495 /dev/nbd14' 00:09:51.496 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:51.496 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:09:51.496 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:09:51.496 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:09:51.496 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:51.496 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:51.496 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:51.496 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:51.496 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:51.496 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:51.496 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:51.496 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:51.496 256+0 records in 00:09:51.496 256+0 records out 00:09:51.496 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113436 s, 92.4 MB/s 00:09:51.496 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:51.496 18:11:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:51.754 256+0 records in 00:09:51.754 256+0 records out 00:09:51.754 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.158391 s, 6.6 MB/s 00:09:51.754 18:11:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:51.754 18:11:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:51.754 256+0 records in 00:09:51.754 256+0 records out 00:09:51.754 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.183263 s, 5.7 MB/s 00:09:51.754 18:11:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:51.754 18:11:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:52.012 256+0 records in 00:09:52.012 256+0 records out 00:09:52.012 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175682 s, 6.0 MB/s 00:09:52.012 18:11:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:52.012 18:11:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:52.270 256+0 records in 00:09:52.270 256+0 records out 00:09:52.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159007 s, 6.6 MB/s 00:09:52.270 18:11:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:52.270 18:11:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:52.270 256+0 records in 00:09:52.270 256+0 records out 00:09:52.270 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167696 s, 6.3 MB/s 00:09:52.270 18:11:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:52.270 18:11:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:52.528 256+0 records in 00:09:52.528 256+0 records out 00:09:52.528 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.18029 s, 5.8 MB/s 00:09:52.528 18:11:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:52.528 18:11:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:52.786 256+0 records in 00:09:52.786 256+0 records out 00:09:52.786 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.223577 s, 4.7 MB/s 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:52.786 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:52.787 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:53.045 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:53.045 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:53.045 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:53.045 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.045 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.045 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:53.045 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:53.045 18:11:27 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:09:53.045 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.045 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:53.610 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:53.610 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:53.610 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:53.610 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.610 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.610 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:53.610 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:53.610 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.610 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.610 18:11:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:53.868 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:53.868 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:53.869 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:53.869 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.869 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.869 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:53.869 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:53.869 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.869 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.869 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:54.127 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:54.127 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:54.127 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:54.127 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:54.127 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:54.127 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:54.127 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:54.127 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:54.127 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:54.127 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:54.384 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:09:54.384 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:54.384 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:54.384 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:54.384 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:54.384 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:54.384 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:54.384 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:54.384 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:54.384 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:54.641 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:54.641 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:54.641 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:54.641 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:54.641 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:54.641 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:54.641 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:54.641 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:54.641 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:54.641 18:11:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:54.897 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:54.897 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:54.897 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:54.897 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:54.897 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:54.897 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:54.897 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:54.897 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:54.897 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:54.897 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.897 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:55.156 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:55.156 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:55.156 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:55.156 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:09:55.156 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:55.156 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:55.156 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:55.156 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:55.156 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:55.156 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:55.156 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:55.156 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:55.156 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:55.156 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.156 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:09:55.156 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:55.414 malloc_lvol_verify 00:09:55.414 18:11:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:55.980 305e1b53-0dcb-458b-8af7-1f76e82225e6 00:09:55.980 18:11:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:56.238 d9dadf2d-4ace-4471-af82-4fb91851e13b 00:09:56.238 18:11:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:56.496 /dev/nbd0 00:09:56.496 18:11:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:09:56.496 18:11:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:09:56.496 18:11:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:09:56.496 18:11:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:09:56.496 18:11:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:09:56.496 mke2fs 1.47.0 (5-Feb-2023) 00:09:56.496 Discarding device blocks: 0/4096 done 00:09:56.496 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:56.496 00:09:56.496 Allocating group tables: 0/1 done 00:09:56.496 Writing inode tables: 0/1 done 00:09:56.496 Creating journal (1024 blocks): done 00:09:56.496 Writing superblocks and filesystem accounting information: 0/1 done 00:09:56.496 00:09:56.496 18:11:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:56.496 18:11:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:56.496 18:11:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:56.496 18:11:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:56.496 18:11:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:56.496 18:11:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:09:56.496 18:11:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:56.755 18:11:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:56.755 18:11:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:56.755 18:11:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:56.755 18:11:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:56.755 18:11:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:56.755 18:11:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:56.755 18:11:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:56.755 18:11:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:56.755 18:11:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62838 00:09:56.755 18:11:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62838 ']' 00:09:56.755 18:11:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62838 00:09:56.755 18:11:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:09:56.755 18:11:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.755 18:11:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62838 00:09:56.755 killing process with pid 62838 00:09:56.755 18:11:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.755 18:11:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.755 18:11:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62838' 00:09:56.755 18:11:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62838 00:09:56.755 18:11:31 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62838 00:09:58.213 18:11:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:58.213 00:09:58.213 real 0m15.448s 00:09:58.213 user 0m22.077s 00:09:58.213 sys 0m5.012s 00:09:58.213 18:11:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.213 ************************************ 00:09:58.213 18:11:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:58.213 END TEST bdev_nbd 00:09:58.213 ************************************ 00:09:58.213 18:11:32 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:09:58.213 18:11:32 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:09:58.213 18:11:32 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:09:58.213 skipping fio tests on NVMe due to multi-ns failures. 00:09:58.213 18:11:32 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:09:58.213 18:11:32 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:58.213 18:11:32 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:58.213 18:11:32 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:09:58.213 18:11:32 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.213 18:11:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:58.213 ************************************ 00:09:58.213 START TEST bdev_verify 00:09:58.213 ************************************ 00:09:58.213 18:11:32 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:58.213 [2024-11-26 18:11:32.406778] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:09:58.213 [2024-11-26 18:11:32.407627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63293 ] 00:09:58.213 [2024-11-26 18:11:32.600585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:58.486 [2024-11-26 18:11:32.769786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.486 [2024-11-26 18:11:32.769792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.423 Running I/O for 5 seconds... 
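For reference, here is the bdevperf command the bdev_verify test is now running, with its flags unpacked. The readings below are inferred from how the flags play out in this log, so treat them as a sketch rather than the tool's usage text:

    # -q 128      queue depth: up to 128 outstanding I/Os per job
    # -o 4096     I/O size in bytes
    # -w verify   write pattern data, read it back, and check the contents
    # -t 5        run for 5 seconds
    # -C          let every core in the mask drive every bdev (hence the paired
    #             Core Mask 0x1 / 0x2 jobs per device in the table below)
    # -m 0x3      core mask: cores 0 and 1, matching the two reactors above
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3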
00:10:01.733 19200.00 IOPS, 75.00 MiB/s
[2024-11-26T18:11:37.130Z] 19232.00 IOPS, 75.12 MiB/s
[2024-11-26T18:11:38.064Z] 18901.33 IOPS, 73.83 MiB/s
[2024-11-26T18:11:38.995Z] 18944.00 IOPS, 74.00 MiB/s
[2024-11-26T18:11:38.995Z] 18777.60 IOPS, 73.35 MiB/s
00:10:04.534 Latency(us)
00:10:04.534 [2024-11-26T18:11:38.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:04.534 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:04.534 Verification LBA range: start 0x0 length 0xbd0bd
00:10:04.534 Nvme0n1 : 5.10 1356.18 5.30 0.00 0.00 94172.32 16681.89 89605.59
00:10:04.534 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:04.534 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:10:04.534 Nvme0n1 : 5.08 1286.19 5.02 0.00 0.00 99270.93 23473.80 111053.73
00:10:04.534 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:04.534 Verification LBA range: start 0x0 length 0x4ff80
00:10:04.534 Nvme1n1p1 : 5.10 1355.07 5.29 0.00 0.00 94079.85 18588.39 81979.58
00:10:04.534 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:04.534 Verification LBA range: start 0x4ff80 length 0x4ff80
00:10:04.534 Nvme1n1p1 : 5.08 1285.68 5.02 0.00 0.00 99046.38 25022.84 107240.73
00:10:04.534 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:04.534 Verification LBA range: start 0x0 length 0x4ff7f
00:10:04.534 Nvme1n1p2 : 5.10 1354.54 5.29 0.00 0.00 93892.05 18588.39 77689.95
00:10:04.534 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:04.534 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:10:04.534 Nvme1n1p2 : 5.08 1285.12 5.02 0.00 0.00 98892.63 22520.55 102474.47
00:10:04.534 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:04.534 Verification LBA range: start 0x0 length 0x80000
00:10:04.534 Nvme2n1 : 5.10 1354.06 5.29 0.00 0.00 93707.56 18826.71 73876.95
00:10:04.535 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:04.535 Verification LBA range: start 0x80000 length 0x80000
00:10:04.535 Nvme2n1 : 5.08 1284.61 5.02 0.00 0.00 98735.83 21567.30 97231.59
00:10:04.535 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:04.535 Verification LBA range: start 0x0 length 0x80000
00:10:04.535 Nvme2n2 : 5.11 1353.57 5.29 0.00 0.00 93493.93 19065.02 76736.70
00:10:04.535 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:04.535 Verification LBA range: start 0x80000 length 0x80000
00:10:04.535 Nvme2n2 : 5.08 1284.11 5.02 0.00 0.00 98595.96 21090.68 99614.72
00:10:04.535 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:04.535 Verification LBA range: start 0x0 length 0x80000
00:10:04.535 Nvme2n3 : 5.11 1353.10 5.29 0.00 0.00 93304.76 17158.52 80073.08
00:10:04.535 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:04.535 Verification LBA range: start 0x80000 length 0x80000
00:10:04.535 Nvme2n3 : 5.09 1283.60 5.01 0.00 0.00 98445.78 20852.36 103904.35
00:10:04.535 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:04.535 Verification LBA range: start 0x0 length 0x20000
00:10:04.535 Nvme3n1 : 5.11 1352.62 5.28 0.00 0.00 93143.10 12094.37 83409.45
00:10:04.535 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:04.535 Verification LBA range: start 0x20000 length 0x20000
Nvme3n1 : 5.09 1283.09 5.01 0.00 0.00 98286.39 14417.92 109147.23
00:10:04.535 [2024-11-26T18:11:38.996Z] ===================================================================================================================
00:10:04.535 [2024-11-26T18:11:38.996Z] Total : 18471.55 72.15 0.00 0.00 96146.70 12094.37 111053.73
00:10:05.908
00:10:05.908 real 0m7.737s
00:10:05.908 user 0m14.179s
00:10:05.908 sys 0m0.332s
00:10:05.908 18:11:40 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:05.908 ************************************
00:10:05.908 END TEST bdev_verify
00:10:05.908 ************************************
00:10:05.908 18:11:40 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:10:05.908 18:11:40 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:05.908 18:11:40 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:10:05.908 18:11:40 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:05.908 18:11:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:05.908 ************************************
00:10:05.908 START TEST bdev_verify_big_io
00:10:05.908 ************************************
00:10:05.908 18:11:40 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:05.908 [2024-11-26 18:11:40.186530] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization...
00:10:05.908 [2024-11-26 18:11:40.186782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63402 ]
00:10:06.166 [2024-11-26 18:11:40.373706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:06.166 [2024-11-26 18:11:40.509963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:06.166 [2024-11-26 18:11:40.509969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:07.101 Running I/O for 5 seconds...
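A quick cross-check on the verify table above: the MiB/s column is just IOPS scaled by the 4096-byte I/O size, e.g. for the Total row (a throwaway calculation, any shell will do):

    awk 'BEGIN { printf "%.2f\n", 18471.55 * 4096 / 1048576 }'   # prints 72.15, the Total MiB/s above

The big-I/O pass now running reuses the same verify workload but with -o 65536, so per-I/O latency rises sharply (the Average latency in the Total rows goes from roughly 96k us to roughly 959k us below) while throughput improves.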
00:10:12.964 1555.00 IOPS, 97.19 MiB/s
[2024-11-26T18:11:47.718Z] 3278.50 IOPS, 204.91 MiB/s
00:10:13.257 Latency(us)
00:10:13.258 [2024-11-26T18:11:47.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:13.258 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:13.258 Verification LBA range: start 0x0 length 0xbd0b
00:10:13.258 Nvme0n1 : 5.85 108.56 6.79 0.00 0.00 1109800.79 34078.72 1166779.11
00:10:13.258 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:13.258 Verification LBA range: start 0xbd0b length 0xbd0b
00:10:13.258 Nvme0n1 : 5.80 107.47 6.72 0.00 0.00 1149742.24 17277.67 1387933.32
00:10:13.258 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:13.258 Verification LBA range: start 0x0 length 0x4ff8
00:10:13.258 Nvme1n1p1 : 5.91 111.23 6.95 0.00 0.00 1072827.31 107717.35 1220161.16
00:10:13.258 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:13.258 Verification LBA range: start 0x4ff8 length 0x4ff8
00:10:13.258 Nvme1n1p1 : 5.80 107.41 6.71 0.00 0.00 1117313.54 38368.35 1616713.54
00:10:13.258 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:13.258 Verification LBA range: start 0x0 length 0x4ff7
00:10:13.258 Nvme1n1p2 : 5.91 112.21 7.01 0.00 0.00 1040016.87 129642.12 1143901.09
00:10:13.258 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:13.258 Verification LBA range: start 0x4ff7 length 0x4ff7
00:10:13.258 Nvme1n1p2 : 5.80 107.18 6.70 0.00 0.00 1086835.92 58624.93 1631965.56
00:10:13.258 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:13.258 Verification LBA range: start 0x0 length 0x8000
00:10:13.258 Nvme2n1 : 5.91 116.55 7.28 0.00 0.00 980520.42 54811.93 1143901.09
00:10:13.258 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:13.258 Verification LBA range: start 0x8000 length 0x8000
00:10:13.258 Nvme2n1 : 5.88 111.32 6.96 0.00 0.00 1019566.71 67680.81 1654843.58
00:10:13.258 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:13.258 Verification LBA range: start 0x0 length 0x8000
00:10:13.258 Nvme2n2 : 6.00 122.85 7.68 0.00 0.00 902709.21 32172.22 1060015.01
00:10:13.258 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:13.258 Verification LBA range: start 0x8000 length 0x8000
00:10:13.258 Nvme2n2 : 6.00 114.90 7.18 0.00 0.00 952467.49 67204.19 1677721.60
00:10:13.258 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:13.258 Verification LBA range: start 0x0 length 0x8000
00:10:13.258 Nvme2n3 : 6.00 127.95 8.00 0.00 0.00 849300.64 51952.17 1075267.03
00:10:13.258 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:13.258 Verification LBA range: start 0x8000 length 0x8000
00:10:13.258 Nvme2n3 : 6.01 124.87 7.80 0.00 0.00 861227.69 13941.29 1700599.62
00:10:13.258 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:13.258 Verification LBA range: start 0x0 length 0x2000
00:10:13.258 Nvme3n1 : 6.01 138.38 8.65 0.00 0.00 766515.44 3425.75 1082893.03
00:10:13.258 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:13.258 Verification LBA range: start 0x2000 length 0x2000
00:10:13.258 Nvme3n1 : 6.06 145.23 9.08 0.00 0.00 721953.27 852.71 1578583.51
00:10:13.258 [2024-11-26T18:11:47.719Z] ===================================================================================================================
00:10:13.258 [2024-11-26T18:11:47.719Z] Total : 1656.11 103.51 0.00 0.00 959331.82 852.71 1700599.62
00:10:15.154
00:10:15.154 real 0m9.120s
00:10:15.154 user 0m16.931s
00:10:15.154 sys 0m0.381s
00:10:15.154 18:11:49 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:15.154 ************************************
00:10:15.154 END TEST bdev_verify_big_io
00:10:15.154 ************************************
00:10:15.154 18:11:49 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:10:15.154 18:11:49 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:15.154 18:11:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:10:15.154 18:11:49 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:15.154 18:11:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:15.154 ************************************
00:10:15.154 START TEST bdev_write_zeroes
00:10:15.154 ************************************
00:10:15.154 18:11:49 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:15.154 [2024-11-26 18:11:49.362059] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization...
00:10:15.154 [2024-11-26 18:11:49.362260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63518 ]
00:10:15.154 [2024-11-26 18:11:49.540610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:15.412 [2024-11-26 18:11:49.669428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:15.977 Running I/O for 1 seconds...
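The same arithmetic holds for the 64 KiB pass that just ended, using its Total row:

    awk 'BEGIN { printf "%.2f\n", 1656.11 * 65536 / 1048576 }'   # prints 103.51, the big-I/O Total MiB/s

The write_zeroes run now in flight drops -C and the core mask, so the EAL comes up with a single core (-c 0x1, one reactor above) and the next table shows one job per bdev; as the name suggests, -w write_zeroes drives the bdev's write-zeroes path for one second instead of a data-verify loop.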
00:10:17.348 57280.00 IOPS, 223.75 MiB/s
00:10:17.348 Latency(us)
00:10:17.348 [2024-11-26T18:11:51.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:17.348 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:17.348 Nvme0n1 : 1.03 8157.40 31.86 0.00 0.00 15653.73 8936.73 37176.79
00:10:17.348 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:17.348 Nvme1n1p1 : 1.03 8147.02 31.82 0.00 0.00 15646.68 13822.14 29908.25
00:10:17.348 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:17.348 Nvme1n1p2 : 1.03 8136.67 31.78 0.00 0.00 15596.44 13524.25 26571.87
00:10:17.348 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:17.348 Nvme2n1 : 1.03 8126.62 31.74 0.00 0.00 15496.74 10187.87 25380.31
00:10:17.348 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:17.348 Nvme2n2 : 1.03 8117.05 31.71 0.00 0.00 15485.87 10009.13 24784.52
00:10:17.348 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:17.348 Nvme2n3 : 1.03 8107.83 31.67 0.00 0.00 15472.63 9413.35 26095.24
00:10:17.348 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:17.348 Nvme3n1 : 1.04 8036.41 31.39 0.00 0.00 15581.18 11081.54 27882.59
00:10:17.348 [2024-11-26T18:11:51.809Z] ===================================================================================================================
00:10:17.348 [2024-11-26T18:11:51.809Z] Total : 56829.00 221.99 0.00 0.00 15561.88 8936.73 37176.79
00:10:18.287
00:10:18.287 real 0m3.303s
00:10:18.287 user 0m2.878s
00:10:18.287 sys 0m0.304s
00:10:18.287 18:11:52 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:18.287 18:11:52 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:10:18.287 ************************************
00:10:18.287 END TEST bdev_write_zeroes
00:10:18.287 ************************************
00:10:18.287 18:11:52 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:18.287 18:11:52 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:10:18.287 18:11:52 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:18.287 18:11:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:10:18.287 ************************************
00:10:18.287 START TEST bdev_json_nonenclosed
00:10:18.287 ************************************
00:10:18.287 18:11:52 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:18.287 [2024-11-26 18:11:52.715456] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization...
00:10:18.287 [2024-11-26 18:11:52.715679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63571 ] 00:10:18.546 [2024-11-26 18:11:52.904386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.805 [2024-11-26 18:11:53.038897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.805 [2024-11-26 18:11:53.039020] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:18.805 [2024-11-26 18:11:53.039049] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:18.805 [2024-11-26 18:11:53.039063] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:19.062 00:10:19.062 real 0m0.706s 00:10:19.062 user 0m0.461s 00:10:19.062 sys 0m0.141s 00:10:19.062 18:11:53 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.062 18:11:53 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:19.062 ************************************ 00:10:19.062 END TEST bdev_json_nonenclosed 00:10:19.063 ************************************ 00:10:19.063 18:11:53 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:19.063 18:11:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:19.063 18:11:53 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.063 18:11:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:19.063 ************************************ 00:10:19.063 START TEST bdev_json_nonarray 00:10:19.063 ************************************ 00:10:19.063 18:11:53 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:19.063 [2024-11-26 18:11:53.491989] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:10:19.063 [2024-11-26 18:11:53.492195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63602 ] 00:10:19.321 [2024-11-26 18:11:53.673168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.579 [2024-11-26 18:11:53.804450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.579 [2024-11-26 18:11:53.804598] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
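bdev_json_nonenclosed and bdev_json_nonarray are negative tests: each hands bdevperf a deliberately malformed JSON config and passes only if the app rejects it and shuts down cleanly through spdk_app_stop, as the *ERROR* lines above show. The repo's nonenclosed.json and nonarray.json themselves are not printed in the log; the shapes sketched here are only inferred from those two error messages:

    # nonenclosed.json (inferred): valid JSON members, but not enclosed in a {} object
    "subsystems": []

    # nonarray.json (inferred): enclosed in {}, but "subsystems" is not an array
    { "subsystems": {} }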
00:10:19.579 [2024-11-26 18:11:53.804640] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:19.579 [2024-11-26 18:11:53.804656] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:19.837 00:10:19.837 real 0m0.720s 00:10:19.837 user 0m0.450s 00:10:19.837 sys 0m0.164s 00:10:19.837 18:11:54 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.837 18:11:54 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:19.837 ************************************ 00:10:19.837 END TEST bdev_json_nonarray 00:10:19.837 ************************************ 00:10:19.837 18:11:54 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:10:19.837 18:11:54 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:10:19.837 18:11:54 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:10:19.837 18:11:54 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:19.837 18:11:54 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.837 18:11:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:19.837 ************************************ 00:10:19.837 START TEST bdev_gpt_uuid 00:10:19.838 ************************************ 00:10:19.838 18:11:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:10:19.838 18:11:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:10:19.838 18:11:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:10:19.838 18:11:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63627 00:10:19.838 18:11:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:19.838 18:11:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63627 00:10:19.838 18:11:54 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:19.838 18:11:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63627 ']' 00:10:19.838 18:11:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.838 18:11:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.838 18:11:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.838 18:11:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.838 18:11:54 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:19.838 [2024-11-26 18:11:54.264172] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
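bdev_gpt_uuid, now starting, checks that GPT unique partition GUIDs survive into the bdev layer: spdk_tgt loads the same bdev.json, waits for GPT examine to finish, then looks each partition up by its GUID and compares the alias against driver_specific.gpt. Reduced to the underlying RPC calls (same UUID and config as this run; rpc.py talks to spdk_tgt's default socket, /var/tmp/spdk.sock):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
    # Look the first GPT partition up by its unique partition GUID
    bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030)
    jq -r '.[0].aliases[0]' <<< "$bdev"                                  # expect the same GUID back
    jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev"   # and here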
00:10:19.838 [2024-11-26 18:11:54.264364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63627 ] 00:10:20.096 [2024-11-26 18:11:54.449707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.352 [2024-11-26 18:11:54.582305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.283 18:11:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.283 18:11:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:10:21.283 18:11:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:21.283 18:11:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.283 18:11:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:21.541 Some configs were skipped because the RPC state that can call them passed over. 00:10:21.541 18:11:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.541 18:11:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:10:21.541 18:11:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.541 18:11:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:21.541 18:11:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.541 18:11:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:10:21.541 18:11:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.541 18:11:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:21.541 18:11:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.541 18:11:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:10:21.541 { 00:10:21.541 "name": "Nvme1n1p1", 00:10:21.541 "aliases": [ 00:10:21.541 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:10:21.541 ], 00:10:21.541 "product_name": "GPT Disk", 00:10:21.541 "block_size": 4096, 00:10:21.541 "num_blocks": 655104, 00:10:21.541 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:21.541 "assigned_rate_limits": { 00:10:21.541 "rw_ios_per_sec": 0, 00:10:21.541 "rw_mbytes_per_sec": 0, 00:10:21.541 "r_mbytes_per_sec": 0, 00:10:21.541 "w_mbytes_per_sec": 0 00:10:21.541 }, 00:10:21.541 "claimed": false, 00:10:21.541 "zoned": false, 00:10:21.541 "supported_io_types": { 00:10:21.541 "read": true, 00:10:21.541 "write": true, 00:10:21.541 "unmap": true, 00:10:21.541 "flush": true, 00:10:21.541 "reset": true, 00:10:21.541 "nvme_admin": false, 00:10:21.541 "nvme_io": false, 00:10:21.541 "nvme_io_md": false, 00:10:21.541 "write_zeroes": true, 00:10:21.541 "zcopy": false, 00:10:21.541 "get_zone_info": false, 00:10:21.541 "zone_management": false, 00:10:21.542 "zone_append": false, 00:10:21.542 "compare": true, 00:10:21.542 "compare_and_write": false, 00:10:21.542 "abort": true, 00:10:21.542 "seek_hole": false, 00:10:21.542 "seek_data": false, 00:10:21.542 "copy": true, 00:10:21.542 "nvme_iov_md": false 00:10:21.542 }, 00:10:21.542 "driver_specific": { 
00:10:21.542 "gpt": { 00:10:21.542 "base_bdev": "Nvme1n1", 00:10:21.542 "offset_blocks": 256, 00:10:21.542 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:10:21.542 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:21.542 "partition_name": "SPDK_TEST_first" 00:10:21.542 } 00:10:21.542 } 00:10:21.542 } 00:10:21.542 ]' 00:10:21.542 18:11:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:10:21.542 18:11:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:10:21.542 18:11:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:10:21.542 18:11:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:21.542 18:11:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:21.542 18:11:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:21.542 18:11:55 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:21.542 18:11:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.542 18:11:55 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:21.800 18:11:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.800 18:11:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:10:21.800 { 00:10:21.800 "name": "Nvme1n1p2", 00:10:21.800 "aliases": [ 00:10:21.800 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:10:21.800 ], 00:10:21.800 "product_name": "GPT Disk", 00:10:21.800 "block_size": 4096, 00:10:21.800 "num_blocks": 655103, 00:10:21.800 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:21.800 "assigned_rate_limits": { 00:10:21.800 "rw_ios_per_sec": 0, 00:10:21.800 "rw_mbytes_per_sec": 0, 00:10:21.800 "r_mbytes_per_sec": 0, 00:10:21.800 "w_mbytes_per_sec": 0 00:10:21.800 }, 00:10:21.800 "claimed": false, 00:10:21.800 "zoned": false, 00:10:21.800 "supported_io_types": { 00:10:21.800 "read": true, 00:10:21.800 "write": true, 00:10:21.800 "unmap": true, 00:10:21.800 "flush": true, 00:10:21.800 "reset": true, 00:10:21.800 "nvme_admin": false, 00:10:21.800 "nvme_io": false, 00:10:21.800 "nvme_io_md": false, 00:10:21.800 "write_zeroes": true, 00:10:21.800 "zcopy": false, 00:10:21.800 "get_zone_info": false, 00:10:21.800 "zone_management": false, 00:10:21.800 "zone_append": false, 00:10:21.800 "compare": true, 00:10:21.800 "compare_and_write": false, 00:10:21.800 "abort": true, 00:10:21.800 "seek_hole": false, 00:10:21.800 "seek_data": false, 00:10:21.800 "copy": true, 00:10:21.800 "nvme_iov_md": false 00:10:21.800 }, 00:10:21.800 "driver_specific": { 00:10:21.800 "gpt": { 00:10:21.800 "base_bdev": "Nvme1n1", 00:10:21.800 "offset_blocks": 655360, 00:10:21.800 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:10:21.800 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:21.800 "partition_name": "SPDK_TEST_second" 00:10:21.800 } 00:10:21.800 } 00:10:21.800 } 00:10:21.800 ]' 00:10:21.800 18:11:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:10:21.800 18:11:56 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:10:21.800 18:11:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:10:21.800 18:11:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:21.800 18:11:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:21.800 18:11:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:21.800 18:11:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63627 00:10:21.800 18:11:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63627 ']' 00:10:21.800 18:11:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63627 00:10:21.800 18:11:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:10:21.800 18:11:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.800 18:11:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63627 00:10:21.800 18:11:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:21.800 18:11:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:21.800 killing process with pid 63627 00:10:21.800 18:11:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63627' 00:10:21.800 18:11:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63627 00:10:21.800 18:11:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63627 00:10:24.359 00:10:24.359 real 0m4.312s 00:10:24.359 user 0m4.509s 00:10:24.359 sys 0m0.587s 00:10:24.359 18:11:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.359 18:11:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:10:24.359 ************************************ 00:10:24.359 END TEST bdev_gpt_uuid 00:10:24.359 ************************************ 00:10:24.359 18:11:58 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:10:24.359 18:11:58 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:10:24.359 18:11:58 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:10:24.359 18:11:58 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:24.359 18:11:58 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:24.359 18:11:58 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:10:24.359 18:11:58 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:10:24.359 18:11:58 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:10:24.359 18:11:58 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:24.616 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:24.616 Waiting for block devices as requested 00:10:24.616 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:24.875 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:10:24.875 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:24.875 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:30.141 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:30.141 18:12:04 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:10:30.141 18:12:04 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:10:30.400 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:10:30.400 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:10:30.400 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:10:30.400 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:10:30.400 18:12:04 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:10:30.400 00:10:30.400 real 1m6.742s 00:10:30.400 user 1m25.729s 00:10:30.400 sys 0m10.881s 00:10:30.400 18:12:04 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.400 ************************************ 00:10:30.400 END TEST blockdev_nvme_gpt 00:10:30.400 ************************************ 00:10:30.400 18:12:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:30.400 18:12:04 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:30.400 18:12:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:30.400 18:12:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.400 18:12:04 -- common/autotest_common.sh@10 -- # set +x 00:10:30.400 ************************************ 00:10:30.400 START TEST nvme 00:10:30.400 ************************************ 00:10:30.400 18:12:04 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:30.400 * Looking for test storage... 00:10:30.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:30.400 18:12:04 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:30.400 18:12:04 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:10:30.400 18:12:04 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:30.658 18:12:04 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:30.658 18:12:04 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.658 18:12:04 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.658 18:12:04 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.658 18:12:04 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.658 18:12:04 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.658 18:12:04 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.658 18:12:04 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.658 18:12:04 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.658 18:12:04 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.658 18:12:04 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.658 18:12:04 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.658 18:12:04 nvme -- scripts/common.sh@344 -- # case "$op" in 00:10:30.658 18:12:04 nvme -- scripts/common.sh@345 -- # : 1 00:10:30.658 18:12:04 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.658 18:12:04 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:30.658 18:12:04 nvme -- scripts/common.sh@365 -- # decimal 1 00:10:30.658 18:12:04 nvme -- scripts/common.sh@353 -- # local d=1 00:10:30.658 18:12:04 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.658 18:12:04 nvme -- scripts/common.sh@355 -- # echo 1 00:10:30.658 18:12:04 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.658 18:12:04 nvme -- scripts/common.sh@366 -- # decimal 2 00:10:30.658 18:12:04 nvme -- scripts/common.sh@353 -- # local d=2 00:10:30.658 18:12:04 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.658 18:12:04 nvme -- scripts/common.sh@355 -- # echo 2 00:10:30.658 18:12:04 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.658 18:12:04 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.658 18:12:04 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.658 18:12:04 nvme -- scripts/common.sh@368 -- # return 0 00:10:30.658 18:12:04 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.658 18:12:04 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:30.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.658 --rc genhtml_branch_coverage=1 00:10:30.658 --rc genhtml_function_coverage=1 00:10:30.658 --rc genhtml_legend=1 00:10:30.658 --rc geninfo_all_blocks=1 00:10:30.658 --rc geninfo_unexecuted_blocks=1 00:10:30.658 00:10:30.658 ' 00:10:30.658 18:12:04 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:30.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.658 --rc genhtml_branch_coverage=1 00:10:30.658 --rc genhtml_function_coverage=1 00:10:30.658 --rc genhtml_legend=1 00:10:30.658 --rc geninfo_all_blocks=1 00:10:30.658 --rc geninfo_unexecuted_blocks=1 00:10:30.658 00:10:30.658 ' 00:10:30.658 18:12:04 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:30.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.658 --rc genhtml_branch_coverage=1 00:10:30.658 --rc genhtml_function_coverage=1 00:10:30.658 --rc genhtml_legend=1 00:10:30.658 --rc geninfo_all_blocks=1 00:10:30.658 --rc geninfo_unexecuted_blocks=1 00:10:30.658 00:10:30.658 ' 00:10:30.658 18:12:04 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:30.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.658 --rc genhtml_branch_coverage=1 00:10:30.658 --rc genhtml_function_coverage=1 00:10:30.658 --rc genhtml_legend=1 00:10:30.658 --rc geninfo_all_blocks=1 00:10:30.658 --rc geninfo_unexecuted_blocks=1 00:10:30.658 00:10:30.658 ' 00:10:30.658 18:12:04 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:30.916 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:31.481 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:31.481 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:31.481 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:31.740 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:31.740 18:12:06 nvme -- nvme/nvme.sh@79 -- # uname 00:10:31.740 18:12:06 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:10:31.740 18:12:06 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:10:31.740 18:12:06 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:10:31.740 18:12:06 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:10:31.740 18:12:06 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:10:31.740 18:12:06 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:10:31.740 18:12:06 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:10:31.740 18:12:06 nvme -- common/autotest_common.sh@1075 -- # stubpid=64282 00:10:31.740 18:12:06 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:10:31.740 Waiting for stub to ready for secondary processes... 00:10:31.740 18:12:06 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:31.740 18:12:06 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64282 ]] 00:10:31.740 18:12:06 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:10:31.740 [2024-11-26 18:12:06.121007] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:10:31.740 [2024-11-26 18:12:06.122292] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:10:32.675 18:12:07 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:32.675 18:12:07 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64282 ]] 00:10:32.675 18:12:07 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:10:33.240 [2024-11-26 18:12:07.517038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:33.240 [2024-11-26 18:12:07.663280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.240 [2024-11-26 18:12:07.663382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.241 [2024-11-26 18:12:07.663388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.241 [2024-11-26 18:12:07.685636] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:10:33.241 [2024-11-26 18:12:07.685882] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:33.241 [2024-11-26 18:12:07.696270] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:10:33.241 [2024-11-26 18:12:07.696507] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:10:33.241 [2024-11-26 18:12:07.698709] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:33.241 [2024-11-26 18:12:07.699060] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:10:33.241 [2024-11-26 18:12:07.699288] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:10:33.498 [2024-11-26 18:12:07.701597] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:33.498 [2024-11-26 18:12:07.701854] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:10:33.498 [2024-11-26 18:12:07.702087] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:10:33.498 [2024-11-26 18:12:07.704458] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:33.498 [2024-11-26 18:12:07.704737] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:10:33.498 [2024-11-26 18:12:07.704963] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:10:33.498 [2024-11-26 18:12:07.705206] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:10:33.498 [2024-11-26 18:12:07.705397] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:10:33.758 18:12:08 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:33.758 18:12:08 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:10:33.758 done. 00:10:33.758 18:12:08 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:33.758 18:12:08 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:10:33.758 18:12:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.758 18:12:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:33.758 ************************************ 00:10:33.758 START TEST nvme_reset 00:10:33.758 ************************************ 00:10:33.758 18:12:08 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:34.019 Initializing NVMe Controllers 00:10:34.019 Skipping QEMU NVMe SSD at 0000:00:10.0 00:10:34.019 Skipping QEMU NVMe SSD at 0000:00:11.0 00:10:34.019 Skipping QEMU NVMe SSD at 0000:00:13.0 00:10:34.019 Skipping QEMU NVMe SSD at 0000:00:12.0 00:10:34.019 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:10:34.019 ************************************ 00:10:34.019 END TEST nvme_reset 00:10:34.019 ************************************ 00:10:34.019 00:10:34.019 real 0m0.342s 00:10:34.019 user 0m0.124s 00:10:34.019 sys 0m0.178s 00:10:34.019 18:12:08 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.019 18:12:08 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:10:34.019 18:12:08 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:10:34.019 18:12:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:34.019 18:12:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.019 18:12:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:34.019 ************************************ 00:10:34.019 START TEST nvme_identify 00:10:34.019 ************************************ 00:10:34.019 18:12:08 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:10:34.019 18:12:08 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:10:34.019 18:12:08 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:10:34.019 18:12:08 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:10:34.019 18:12:08 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:10:34.019 18:12:08 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:34.019 18:12:08 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:10:34.019 18:12:08 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:34.019 18:12:08 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:34.019 18:12:08 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:34.277 18:12:08 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:34.277 18:12:08 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:34.277 18:12:08 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:10:34.539 [2024-11-26 18:12:08.810683] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64315 terminated unexpected ===================================================== 00:10:34.539 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:34.539 ===================================================== 00:10:34.539 Controller Capabilities/Features 00:10:34.539 ================================ 00:10:34.539 Vendor ID: 1b36 00:10:34.539 Subsystem Vendor ID: 1af4 00:10:34.539 Serial Number: 12340 00:10:34.539 Model Number: QEMU NVMe Ctrl 00:10:34.539 Firmware Version: 8.0.0 00:10:34.539 Recommended Arb Burst: 6 00:10:34.539 IEEE OUI Identifier: 00 54 52 00:10:34.539 Multi-path I/O 00:10:34.539 May have multiple subsystem ports: No 00:10:34.539 May have multiple controllers: No 00:10:34.539 Associated with SR-IOV VF: No 00:10:34.539 Max Data Transfer Size: 524288 00:10:34.539 Max Number of Namespaces: 256 00:10:34.539 Max Number of I/O Queues: 64 00:10:34.539 NVMe Specification Version (VS): 1.4 00:10:34.539 NVMe Specification Version (Identify): 1.4 00:10:34.539 Maximum Queue Entries: 2048 00:10:34.539 Contiguous Queues Required: Yes 00:10:34.539 Arbitration Mechanisms Supported 00:10:34.539 Weighted Round Robin: Not Supported 00:10:34.539 Vendor Specific: Not Supported 00:10:34.539 Reset Timeout: 7500 ms 00:10:34.539 Doorbell Stride: 4 bytes 00:10:34.539 NVM Subsystem Reset: Not Supported 00:10:34.539 Command Sets Supported 00:10:34.539 NVM Command Set: Supported 00:10:34.539 Boot Partition: Not Supported 00:10:34.539 Memory Page Size Minimum: 4096 bytes 00:10:34.539 Memory Page Size Maximum: 65536 bytes 00:10:34.539 Persistent Memory Region: Not Supported 00:10:34.539 Optional Asynchronous Events Supported 00:10:34.539 Namespace Attribute Notices: Supported 00:10:34.539 Firmware Activation Notices: Not Supported 00:10:34.539 ANA Change Notices: Not Supported 00:10:34.539 PLE Aggregate Log Change Notices: Not Supported 00:10:34.539 LBA Status Info Alert Notices: Not Supported 00:10:34.539 EGE Aggregate Log Change Notices: Not Supported 00:10:34.539 Normal NVM Subsystem Shutdown event: Not Supported 00:10:34.539 Zone Descriptor Change Notices: Not Supported 00:10:34.539 Discovery Log Change Notices: Not Supported 00:10:34.539 Controller Attributes 00:10:34.539 128-bit Host Identifier: Not Supported 00:10:34.539 Non-Operational Permissive Mode: Not Supported 00:10:34.539 NVM Sets: Not Supported 00:10:34.539 Read Recovery Levels: Not Supported 00:10:34.539 Endurance Groups: Not Supported 00:10:34.539 Predictable Latency Mode: Not Supported 00:10:34.539 Traffic Based Keep Alive: Not Supported 00:10:34.539 Namespace Granularity: Not Supported 00:10:34.539 SQ Associations: Not Supported 00:10:34.539 UUID List: Not Supported 00:10:34.539 Multi-Domain Subsystem: Not Supported 00:10:34.539 Fixed Capacity Management: Not Supported 00:10:34.539 Variable Capacity Management: Not Supported 00:10:34.539 Delete Endurance Group: Not Supported 00:10:34.539 Delete NVM Set: Not Supported 00:10:34.539 Extended LBA Formats Supported: Supported 00:10:34.539 Flexible Data Placement Supported: Not Supported 00:10:34.539 00:10:34.539 Controller Memory Buffer Support 00:10:34.539 ================================ 00:10:34.539 Supported: No 00:10:34.539 00:10:34.539
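The scripts/common.sh xtrace near the top of the nvme suite (the "lt 1.15 2" probe that selects the lcov flags) walks a classic field-by-field version compare: split both strings on ".-:", then compare the fields numerically index by index, treating a missing field as 0. A minimal standalone sketch of the same pattern, assuming only bash (the ver1_l/ver2_l bookkeeping from the trace is condensed here):

    #!/usr/bin/env bash
    # Field-by-field version compare, as traced from scripts/common.sh:
    # succeeds when "$1 $2 $3" holds, e.g. cmp_versions 1.15 '<' 2.
    cmp_versions() {
        local op=$2 IFS=.-:
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            # A missing field compares as 0, so "2" behaves like "2.0".
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]   # every field matched
    }

    cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2.x"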
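The get_nvme_bdfs helper traced just above builds its device list by piping scripts/gen_nvme.sh into jq; the same lines work standalone. A sketch with the paths exactly as they appear in the trace (jq assumed installed):

    #!/usr/bin/env bash
    # gen_nvme.sh emits a JSON bdev config; .config[].params.traddr holds the
    # PCI address (BDF) of every NVMe controller attached to the VM.
    rootdir=/home/vagrant/spdk_repo/spdk

    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))

    # The harness bails out when the array comes back empty; keep the same guard.
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"   # 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0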
Persistent Memory Region Support 00:10:34.539 ================================ 00:10:34.539 Supported: No 00:10:34.539 00:10:34.539 Admin Command Set Attributes 00:10:34.539 ============================ 00:10:34.539 Security Send/Receive: Not Supported 00:10:34.539 Format NVM: Supported 00:10:34.539 Firmware Activate/Download: Not Supported 00:10:34.539 Namespace Management: Supported 00:10:34.539 Device Self-Test: Not Supported 00:10:34.539 Directives: Supported 00:10:34.539 NVMe-MI: Not Supported 00:10:34.539 Virtualization Management: Not Supported 00:10:34.539 Doorbell Buffer Config: Supported 00:10:34.539 Get LBA Status Capability: Not Supported 00:10:34.539 Command & Feature Lockdown Capability: Not Supported 00:10:34.539 Abort Command Limit: 4 00:10:34.539 Async Event Request Limit: 4 00:10:34.539 Number of Firmware Slots: N/A 00:10:34.539 Firmware Slot 1 Read-Only: N/A 00:10:34.539 Firmware Activation Without Reset: N/A 00:10:34.539 Multiple Update Detection Support: N/A 00:10:34.539 Firmware Update Granularity: No Information Provided 00:10:34.539 Per-Namespace SMART Log: Yes 00:10:34.539 Asymmetric Namespace Access Log Page: Not Supported 00:10:34.539 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:34.539 Command Effects Log Page: Supported 00:10:34.539 Get Log Page Extended Data: Supported 00:10:34.539 Telemetry Log Pages: Not Supported 00:10:34.539 Persistent Event Log Pages: Not Supported 00:10:34.539 Supported Log Pages Log Page: May Support 00:10:34.539 Commands Supported & Effects Log Page: Not Supported 00:10:34.539 Feature Identifiers & Effects Log Page:May Support 00:10:34.539 NVMe-MI Commands & Effects Log Page: May Support 00:10:34.539 Data Area 4 for Telemetry Log: Not Supported 00:10:34.539 Error Log Page Entries Supported: 1 00:10:34.539 Keep Alive: Not Supported 00:10:34.539 00:10:34.539 NVM Command Set Attributes 00:10:34.539 ========================== 00:10:34.539 Submission Queue Entry Size 00:10:34.539 Max: 64 00:10:34.539 Min: 64 00:10:34.539 Completion Queue Entry Size 00:10:34.539 Max: 16 00:10:34.539 Min: 16 00:10:34.539 Number of Namespaces: 256 00:10:34.539 Compare Command: Supported 00:10:34.539 Write Uncorrectable Command: Not Supported 00:10:34.539 Dataset Management Command: Supported 00:10:34.539 Write Zeroes Command: Supported 00:10:34.539 Set Features Save Field: Supported 00:10:34.539 Reservations: Not Supported 00:10:34.539 Timestamp: Supported 00:10:34.539 Copy: Supported 00:10:34.539 Volatile Write Cache: Present 00:10:34.539 Atomic Write Unit (Normal): 1 00:10:34.539 Atomic Write Unit (PFail): 1 00:10:34.539 Atomic Compare & Write Unit: 1 00:10:34.539 Fused Compare & Write: Not Supported 00:10:34.539 Scatter-Gather List 00:10:34.539 SGL Command Set: Supported 00:10:34.539 SGL Keyed: Not Supported 00:10:34.539 SGL Bit Bucket Descriptor: Not Supported 00:10:34.539 SGL Metadata Pointer: Not Supported 00:10:34.539 Oversized SGL: Not Supported 00:10:34.539 SGL Metadata Address: Not Supported 00:10:34.539 SGL Offset: Not Supported 00:10:34.539 Transport SGL Data Block: Not Supported 00:10:34.539 Replay Protected Memory Block: Not Supported 00:10:34.539 00:10:34.539 Firmware Slot Information 00:10:34.539 ========================= 00:10:34.539 Active slot: 1 00:10:34.539 Slot 1 Firmware Revision: 1.0 00:10:34.539 00:10:34.539 00:10:34.539 Commands Supported and Effects 00:10:34.539 ============================== 00:10:34.539 Admin Commands 00:10:34.539 -------------- 00:10:34.539 Delete I/O Submission Queue (00h): Supported 00:10:34.539 Create I/O Submission 
Queue (01h): Supported 00:10:34.539 Get Log Page (02h): Supported 00:10:34.539 Delete I/O Completion Queue (04h): Supported 00:10:34.539 Create I/O Completion Queue (05h): Supported 00:10:34.539 Identify (06h): Supported 00:10:34.539 Abort (08h): Supported 00:10:34.539 Set Features (09h): Supported 00:10:34.539 Get Features (0Ah): Supported 00:10:34.539 Asynchronous Event Request (0Ch): Supported 00:10:34.539 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:34.539 Directive Send (19h): Supported 00:10:34.539 Directive Receive (1Ah): Supported 00:10:34.539 Virtualization Management (1Ch): Supported 00:10:34.539 Doorbell Buffer Config (7Ch): Supported 00:10:34.539 Format NVM (80h): Supported LBA-Change 00:10:34.539 I/O Commands 00:10:34.539 ------------ 00:10:34.539 Flush (00h): Supported LBA-Change 00:10:34.539 Write (01h): Supported LBA-Change 00:10:34.539 Read (02h): Supported 00:10:34.539 Compare (05h): Supported 00:10:34.539 Write Zeroes (08h): Supported LBA-Change 00:10:34.539 Dataset Management (09h): Supported LBA-Change 00:10:34.539 Unknown (0Ch): Supported 00:10:34.539 Unknown (12h): Supported 00:10:34.539 Copy (19h): Supported LBA-Change 00:10:34.539 Unknown (1Dh): Supported LBA-Change 00:10:34.539 00:10:34.539 Error Log 00:10:34.539 ========= 00:10:34.539 00:10:34.539 Arbitration 00:10:34.539 =========== 00:10:34.539 Arbitration Burst: no limit 00:10:34.539 00:10:34.539 Power Management 00:10:34.539 ================ 00:10:34.539 Number of Power States: 1 00:10:34.539 Current Power State: Power State #0 00:10:34.539 Power State #0: 00:10:34.539 Max Power: 25.00 W 00:10:34.539 Non-Operational State: Operational 00:10:34.539 Entry Latency: 16 microseconds 00:10:34.539 Exit Latency: 4 microseconds 00:10:34.539 Relative Read Throughput: 0 00:10:34.539 Relative Read Latency: 0 00:10:34.539 Relative Write Throughput: 0 00:10:34.539 Relative Write Latency: 0 00:10:34.539 Idle Power: Not Reported 00:10:34.539 Active Power: Not Reported 00:10:34.539 Non-Operational Permissive Mode: Not Supported 00:10:34.539 00:10:34.539 Health Information 00:10:34.539 ================== 00:10:34.540 Critical Warnings: 00:10:34.540 Available Spare Space: OK 00:10:34.540 Temperature: OK 00:10:34.540 Device Reliability: OK 00:10:34.540 Read Only: No 00:10:34.540 Volatile Memory Backup: OK 00:10:34.540 Current Temperature: 323 Kelvin (50 Celsius) 00:10:34.540 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:34.540 Available Spare: 0% 00:10:34.540 Available Spare Threshold: 0% 00:10:34.540 Life Percentage Used: 0% 00:10:34.540 Data Units Read: 668 00:10:34.540 Data Units Written: 596 00:10:34.540 Host Read Commands: 32897 00:10:34.540 Host Write Commands: 32683 00:10:34.540 Controller Busy Time: 0 minutes 00:10:34.540 Power Cycles: 0 00:10:34.540 Power On Hours: 0 hours 00:10:34.540 Unsafe Shutdowns: 0 00:10:34.540 Unrecoverable Media Errors: 0 00:10:34.540 Lifetime Error Log Entries: 0 00:10:34.540 Warning Temperature Time: 0 minutes 00:10:34.540 Critical Temperature Time: 0 minutes 00:10:34.540 00:10:34.540 Number of Queues 00:10:34.540 ================ 00:10:34.540 Number of I/O Submission Queues: 64 00:10:34.540 Number of I/O Completion Queues: 64 00:10:34.540 00:10:34.540 ZNS Specific Controller Data 00:10:34.540 ============================ 00:10:34.540 Zone Append Size Limit: 0 00:10:34.540 00:10:34.540 00:10:34.540 Active Namespaces 00:10:34.540 ================= 00:10:34.540 Namespace ID:1 00:10:34.540 Error Recovery Timeout: Unlimited 00:10:34.540 Command Set Identifier: NVM (00h) 
00:10:34.540 Deallocate: Supported 00:10:34.540 Deallocated/Unwritten Error: Supported 00:10:34.540 Deallocated Read Value: All 0x00 00:10:34.540 Deallocate in Write Zeroes: Not Supported 00:10:34.540 Deallocated Guard Field: 0xFFFF 00:10:34.540 Flush: Supported 00:10:34.540 Reservation: Not Supported 00:10:34.540 Metadata Transferred as: Separate Metadata Buffer 00:10:34.540 Namespace Sharing Capabilities: Private 00:10:34.540 Size (in LBAs): 1548666 (5GiB) 00:10:34.540 Capacity (in LBAs): 1548666 (5GiB) 00:10:34.540 Utilization (in LBAs): 1548666 (5GiB) 00:10:34.540 Thin Provisioning: Not Supported 00:10:34.540 Per-NS Atomic Units: No 00:10:34.540 Maximum Single Source Range Length: 128 00:10:34.540 Maximum Copy Length: 128 00:10:34.540 Maximum Source Range Count: 128 00:10:34.540 NGUID/EUI64 Never Reused: No 00:10:34.540 Namespace Write Protected: No 00:10:34.540 Number of LBA Formats: 8 00:10:34.540 Current LBA Format: LBA Format #07 00:10:34.540 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:34.540 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:34.540 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:34.540 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:34.540 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:34.540 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:34.540 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:34.540 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:34.540 00:10:34.540 NVM Specific Namespace Data 00:10:34.540 =========================== 00:10:34.540 Logical Block Storage Tag Mask: 0 00:10:34.540 Protection Information Capabilities: 00:10:34.540 16b Guard Protection Information Storage Tag Support: No 00:10:34.540 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:34.540 Storage Tag Check Read Support: No 00:10:34.540 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.540 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.540 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.540 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.540 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.540 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.540 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.540 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.540 ===================================================== 00:10:34.540 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:34.540 ===================================================== 00:10:34.540 Controller Capabilities/Features 00:10:34.540 ================================ 00:10:34.540 Vendor ID: 1b36 00:10:34.540 Subsystem Vendor ID: 1af4 00:10:34.540 Serial Number: 12341 00:10:34.540 Model Number: QEMU NVMe Ctrl 00:10:34.540 Firmware Version: 8.0.0 00:10:34.540 Recommended Arb Burst: 6 00:10:34.540 IEEE OUI Identifier: 00 54 52 00:10:34.540 Multi-path I/O 00:10:34.540 May have multiple subsystem ports: No 00:10:34.540 May have multiple controllers: No 00:10:34.540 Associated with SR-IOV VF: No 00:10:34.540 Max Data Transfer Size: 524288 00:10:34.540 Max Number of Namespaces: 256 00:10:34.540 Max Number of I/O Queues: 64 00:10:34.540 NVMe Specification 
Version (VS): 1.4 00:10:34.540 NVMe Specification Version (Identify): 1.4 00:10:34.540 Maximum Queue Entries: 2048 00:10:34.540 Contiguous Queues Required: Yes 00:10:34.540 Arbitration Mechanisms Supported 00:10:34.540 Weighted Round Robin: Not Supported 00:10:34.540 Vendor Specific: Not Supported 00:10:34.540 Reset Timeout: 7500 ms 00:10:34.540 Doorbell Stride: 4 bytes 00:10:34.540 NVM Subsystem Reset: Not Supported 00:10:34.540 Command Sets Supported 00:10:34.540 NVM Command Set: Supported 00:10:34.540 Boot Partition: Not Supported 00:10:34.540 Memory Page Size Minimum: 4096 bytes 00:10:34.540 Memory Page Size Maximum: 65536 bytes 00:10:34.540 Persistent Memory Region: Not Supported 00:10:34.540 Optional Asynchronous Events Supported 00:10:34.540 Namespace Attribute Notices: Supported 00:10:34.540 Firmware Activation Notices: Not Supported 00:10:34.540 ANA Change Notices: Not Supported 00:10:34.540 PLE Aggregate Log Change Notices: Not Supported 00:10:34.540 LBA Status Info Alert Notices: Not Supported 00:10:34.540 EGE Aggregate Log Change Notices: Not Supported 00:10:34.540 Normal NVM Subsystem Shutdown event: Not Supported 00:10:34.540 Zone Descriptor Change Notices: Not Supported 00:10:34.540 Discovery Log Change Notices: Not Supported 00:10:34.540 Controller Attributes 00:10:34.540 128-bit Host Identifier: Not Supported 00:10:34.540 Non-Operational Permissive Mode: Not Supported 00:10:34.540 NVM Sets: Not Supported 00:10:34.540 Read Recovery Levels: Not Supported 00:10:34.540 Endurance Groups: Not Supported 00:10:34.540 Predictable Latency Mode: Not Supported 00:10:34.540 Traffic Based Keep Alive: Not Supported 00:10:34.540 Namespace Granularity: Not Supported 00:10:34.540 SQ Associations: Not Supported 00:10:34.540 UUID List: Not Supported 00:10:34.540 Multi-Domain Subsystem: Not Supported 00:10:34.540 Fixed Capacity Management: Not Supported 00:10:34.540 Variable Capacity Management: Not Supported 00:10:34.540 Delete Endurance Group: Not Supported 00:10:34.540 Delete NVM Set: Not Supported 00:10:34.540 Extended LBA Formats Supported: Supported 00:10:34.540 Flexible Data Placement Supported: Not Supported 00:10:34.540 00:10:34.540 Controller Memory Buffer Support 00:10:34.540 ================================ 00:10:34.540 Supported: No 00:10:34.540 00:10:34.540 Persistent Memory Region Support 00:10:34.540 ================================ 00:10:34.540 Supported: No 00:10:34.540 00:10:34.540 Admin Command Set Attributes 00:10:34.540 ============================ 00:10:34.540 Security Send/Receive: Not Supported 00:10:34.540 Format NVM: Supported 00:10:34.540 Firmware Activate/Download: Not Supported 00:10:34.540 Namespace Management: Supported 00:10:34.540 Device Self-Test: Not Supported 00:10:34.540 Directives: Supported 00:10:34.540 NVMe-MI: Not Supported 00:10:34.540 Virtualization Management: Not Supported 00:10:34.540 Doorbell Buffer Config: Supported 00:10:34.540 Get LBA Status Capability: Not Supported 00:10:34.540 Command & Feature Lockdown Capability: Not Supported 00:10:34.540 Abort Command Limit: 4 00:10:34.540 Async Event Request Limit: 4 00:10:34.540 Number of Firmware Slots: N/A 00:10:34.540 Firmware Slot 1 Read-Only: N/A 00:10:34.540 Firmware Activation Without Reset: N/A 00:10:34.540 Multiple Update Detection Support: N/A 00:10:34.540 Firmware Update Granularity: No Information Provided 00:10:34.540 Per-Namespace SMART Log: Yes 00:10:34.540 Asymmetric Namespace Access Log Page: Not Supported 00:10:34.540 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:34.540 Command 
Effects Log Page: Supported 00:10:34.540 Get Log Page Extended Data: Supported 00:10:34.540 Telemetry Log Pages: Not Supported 00:10:34.540 Persistent Event Log Pages: Not Supported 00:10:34.540 Supported Log Pages Log Page: May Support 00:10:34.540 Commands Supported & Effects Log Page: Not Supported 00:10:34.540 Feature Identifiers & Effects Log Page:May Support 00:10:34.540 NVMe-MI Commands & Effects Log Page: May Support 00:10:34.540 Data Area 4 for Telemetry Log: Not Supported 00:10:34.540 Error Log Page Entries Supported: 1 00:10:34.540 Keep Alive: Not Supported 00:10:34.540 00:10:34.540 NVM Command Set Attributes 00:10:34.540 ========================== 00:10:34.540 Submission Queue Entry Size 00:10:34.540 Max: 64 00:10:34.540 Min: 64 00:10:34.540 Completion Queue Entry Size 00:10:34.540 Max: 16 00:10:34.540 Min: 16 00:10:34.540 Number of Namespaces: 256 00:10:34.540 Compare Command: Supported 00:10:34.540 Write Uncorrectable Command: Not Supported 00:10:34.540 Dataset Management Command: Supported 00:10:34.540 Write Zeroes Command: Supported 00:10:34.540 Set Features Save Field: Supported 00:10:34.540 Reservations: Not Supported 00:10:34.540 Timestamp: Supported 00:10:34.540 Copy: Supported 00:10:34.540 Volatile Write Cache: Present 00:10:34.540 Atomic Write Unit (Normal): 1 00:10:34.540 Atomic Write Unit (PFail): 1 00:10:34.540 Atomic Compare & Write Unit: 1 00:10:34.540 Fused Compare & Write: Not Supported 00:10:34.540 Scatter-Gather List 00:10:34.540 SGL Command Set: Supported 00:10:34.540 SGL Keyed: Not Supported 00:10:34.540 SGL Bit Bucket Descriptor: Not Supported 00:10:34.540 SGL Metadata Pointer: Not Supported 00:10:34.540 Oversized SGL: Not Supported 00:10:34.540 SGL Metadata Address: Not Supported 00:10:34.540 SGL Offset: Not Supported 00:10:34.540 Transport SGL Data Block: Not Supported 00:10:34.540 Replay Protected Memory Block: Not Supported 00:10:34.540 00:10:34.540 Firmware Slot Information 00:10:34.540 ========================= 00:10:34.540 Active slot: 1 00:10:34.540 Slot 1 Firmware Revision: 1.0 00:10:34.540 00:10:34.540 00:10:34.540 Commands Supported and Effects 00:10:34.540 ============================== 00:10:34.540 Admin Commands 00:10:34.540 -------------- 00:10:34.540 Delete I/O Submission Queue (00h): Supported 00:10:34.540 Create I/O Submission Queue (01h): Supported 00:10:34.540 Get Log Page (02h): Supported 00:10:34.540 Delete I/O Completion Queue (04h): Supported 00:10:34.540 Create I/O Completion Queue (05h): Supported 00:10:34.540 Identify (06h): Supported 00:10:34.540 Abort (08h): Supported 00:10:34.540 Set Features (09h): Supported 00:10:34.540 Get Features (0Ah): Supported 00:10:34.540 Asynchronous Event Request (0Ch): Supported 00:10:34.540 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:34.540 Directive Send (19h): Supported 00:10:34.540 Directive Receive (1Ah): Supported 00:10:34.540 Virtualization Management (1Ch): Supported 00:10:34.540 Doorbell Buffer Config (7Ch): Supported 00:10:34.540 Format NVM (80h): Supported LBA-Change 00:10:34.540 I/O Commands 00:10:34.540 ------------ 00:10:34.540 Flush (00h): Supported LBA-Change 00:10:34.540 Write (01h): Supported LBA-Change 00:10:34.540 Read (02h): Supported 00:10:34.540 Compare (05h): Supported 00:10:34.540 Write Zeroes (08h): Supported LBA-Change 00:10:34.540 Dataset Management (09h): Supported LBA-Change 00:10:34.540 Unknown (0Ch): Supported 00:10:34.540 Unknown (12h): Supported 00:10:34.540 Copy (19h): Supported LBA-Change 00:10:34.540 Unknown (1Dh): Supported LBA-Change 
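Each controller repeats the same report layout, so one field per device can be pulled out of a saved capture with plain grep; a throwaway sketch (the capture file name is hypothetical):

    # Save the report once, then pair each controller banner with its serial:
    #   spdk_nvme_identify -i 0 > identify.log
    grep -E 'NVMe Controller at|Serial Number:' identify.log
    # -> 0000:00:10.0 / 12340, 0000:00:11.0 / 12341,
    #    0000:00:13.0 / 12343, 0000:00:12.0 / 12342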
00:10:34.540 00:10:34.540 Error Log 00:10:34.540 ========= 00:10:34.540 00:10:34.540 Arbitration 00:10:34.540 =========== 00:10:34.540 Arbitration Burst: no limit 00:10:34.540 00:10:34.540 Power Management 00:10:34.540 ================ 00:10:34.540 Number of Power States: 1 00:10:34.540 Current Power State: Power State #0 00:10:34.540 Power State #0: 00:10:34.540 Max Power: 25.00 W 00:10:34.540 Non-Operational State: Operational 00:10:34.540 Entry Latency: 16 microseconds 00:10:34.540 Exit Latency: 4 microseconds 00:10:34.540 Relative Read Throughput: 0 00:10:34.540 Relative Read Latency: 0 00:10:34.540 Relative Write Throughput: 0 00:10:34.540 Relative Write Latency: 0 00:10:34.540 Idle Power: Not Reported 00:10:34.540 Active Power: Not Reported 00:10:34.540 Non-Operational Permissive Mode: Not Supported 00:10:34.540 00:10:34.540 Health Information 00:10:34.540 ================== 00:10:34.540 Critical Warnings: 00:10:34.540 Available Spare Space: OK 00:10:34.540 Temperature: OK 00:10:34.540 Device Reliability: OK 00:10:34.540 Read Only: No 00:10:34.540 Volatile Memory Backup: OK 00:10:34.540 Current Temperature: 323 Kelvin (50 Celsius) 00:10:34.540 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:34.540 Available Spare: 0% 00:10:34.540 Available Spare Threshold: 0% 00:10:34.540 Life Percentage Used: 0% 00:10:34.540 Data Units Read: 999 00:10:34.540 Data Units Written: 866 00:10:34.540 Host Read Commands: 48666 00:10:34.540 Host Write Commands: 47456 00:10:34.540 Controller Busy Time: 0 minutes 00:10:34.540 Power Cycles: 0 00:10:34.540 Power On Hours: 0 hours 00:10:34.540 Unsafe Shutdowns: 0 00:10:34.540 Unrecoverable Media Errors: 0 00:10:34.540 Lifetime Error Log Entries: 0 00:10:34.540 Warning Temperature Time: 0 minutes 00:10:34.540 Critical Temperature Time: 0 minutes 00:10:34.540 00:10:34.540 Number of Queues 00:10:34.540 ================ 00:10:34.540 Number of I/O Submission Queues: 64 00:10:34.540 Number of I/O Completion Queues: 64 00:10:34.540 00:10:34.540 ZNS Specific Controller Data 00:10:34.540 ============================ 00:10:34.540 Zone Append Size Limit: 0 00:10:34.540 00:10:34.540 00:10:34.540 Active Namespaces 00:10:34.540 ================= 00:10:34.540 Namespace ID:1 00:10:34.540 Error Recovery Timeout: Unlimited 00:10:34.540 Command Set Identifier: NVM (00h) 00:10:34.540 Deallocate: Supported 00:10:34.540 Deallocated/Unwritten Error: Supported 00:10:34.540 Deallocated Read Value: All 0x00 00:10:34.540 Deallocate in Write Zeroes: Not Supported 00:10:34.540 Deallocated Guard Field: 0xFFFF 00:10:34.540 Flush: Supported 00:10:34.540 Reservation: Not Supported 00:10:34.540 Namespace Sharing Capabilities: Private 00:10:34.540 Size (in LBAs): 1310720 (5GiB) 00:10:34.540 Capacity (in LBAs): 1310720 (5GiB) 00:10:34.540 Utilization (in LBAs): 1310720 (5GiB) 00:10:34.540 Thin Provisioning: Not Supported 00:10:34.540 Per-NS Atomic Units: No 00:10:34.540 Maximum Single Source Range Length: 128 00:10:34.540 Maximum Copy Length: 128 00:10:34.540 Maximum Source Range Count: 128 00:10:34.540 NGUID/EUI64 Never Reused: No 00:10:34.540 Namespace Write Protected: No 00:10:34.540 Number of LBA Formats: 8 00:10:34.540 Current LBA Format: LBA Format #04 00:10:34.540 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:34.540 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:34.540 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:34.540 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:34.540 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:34.540 LBA Format #05: 
Data Size: 4096 Metadata Size: 8 00:10:34.540 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:34.540 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:34.540 00:10:34.540 NVM Specific Namespace Data 00:10:34.540 =========================== 00:10:34.540 Logical Block Storage Tag Mask: 0 00:10:34.540 Protection Information Capabilities: 00:10:34.540 16b Guard Protection Information Storage Tag Support: No 00:10:34.540 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:34.540 Storage Tag Check Read Support: No 00:10:34.540 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.540 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.540 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.540 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.541 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.541 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.541 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.541 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.541 ===================================================== 00:10:34.541 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:34.541 ===================================================== 00:10:34.541 Controller Capabilities/Features 00:10:34.541 ================================ 00:10:34.541 Vendor ID: 1b36 00:10:34.541 Subsystem Vendor ID: 1af4 00:10:34.541 Serial Number: 12343 00:10:34.541 Model Number: QEMU NVMe Ctrl 00:10:34.541 Firmware Version: 8.0.0 00:10:34.541 Recommended Arb Burst: 6 00:10:34.541 IEEE OUI Identifier: 00 54 52 00:10:34.541 Multi-path I/O 00:10:34.541 May have multiple subsystem ports: No 00:10:34.541 May have multiple controllers: Yes 00:10:34.541 Associated with SR-IOV VF: No 00:10:34.541 Max Data Transfer Size: 524288 00:10:34.541 Max Number of Namespaces: 256 00:10:34.541 Max Number of I/O Queues: 64 00:10:34.541 NVMe Specification Version (VS): 1.4 00:10:34.541 NVMe Specification Version (Identify): 1.4 00:10:34.541 Maximum Queue Entries: 2048 00:10:34.541 Contiguous Queues Required: Yes 00:10:34.541 Arbitration Mechanisms Supported 00:10:34.541 Weighted Round Robin: Not Supported 00:10:34.541 Vendor Specific: Not Supported 00:10:34.541 Reset Timeout: 7500 ms 00:10:34.541 Doorbell Stride: 4 bytes 00:10:34.541 NVM Subsystem Reset: Not Supported 00:10:34.541 Command Sets Supported 00:10:34.541 NVM Command Set: Supported 00:10:34.541 Boot Partition: Not Supported 00:10:34.541 Memory Page Size Minimum: 4096 bytes 00:10:34.541 Memory Page Size Maximum: 65536 bytes 00:10:34.541 Persistent Memory Region: Not Supported 00:10:34.541 Optional Asynchronous Events Supported 00:10:34.541 Namespace Attribute Notices: Supported 00:10:34.541 Firmware Activation Notices: Not Supported 00:10:34.541 ANA Change Notices: Not Supported 00:10:34.541 PLE Aggregate Log Change Notices: Not Supported 00:10:34.541 LBA Status Info Alert Notices: Not Supported 00:10:34.541 EGE Aggregate Log Change Notices: Not Supported 00:10:34.541 Normal NVM Subsystem Shutdown event: Not Supported 00:10:34.541 Zone Descriptor Change Notices: Not Supported 00:10:34.541 Discovery Log Change Notices: Not Supported 00:10:34.541 Controller 
Attributes 00:10:34.541 128-bit Host Identifier: Not Supported 00:10:34.541 Non-Operational Permissive Mode: Not Supported 00:10:34.541 NVM Sets: Not Supported 00:10:34.541 Read Recovery Levels: Not Supported 00:10:34.541 Endurance Groups: Supported 00:10:34.541 Predictable Latency Mode: Not Supported 00:10:34.541 Traffic Based Keep Alive: Not Supported 00:10:34.541 Namespace Granularity: Not Supported 00:10:34.541 SQ Associations: Not Supported 00:10:34.541 UUID List: Not Supported 00:10:34.541 Multi-Domain Subsystem: Not Supported 00:10:34.541 Fixed Capacity Management: Not Supported 00:10:34.541 Variable Capacity Management: Not Supported 00:10:34.541 Delete Endurance Group: Not Supported 00:10:34.541 Delete NVM Set: Not Supported 00:10:34.541 Extended LBA Formats Supported: Supported 00:10:34.541 Flexible Data Placement Supported: Supported 00:10:34.541 00:10:34.541 Controller Memory Buffer Support 00:10:34.541 ================================ 00:10:34.541 Supported: No 00:10:34.541 00:10:34.541 Persistent Memory Region Support 00:10:34.541 ================================ 00:10:34.541 Supported: No 00:10:34.541 00:10:34.541 Admin Command Set Attributes 00:10:34.541 ============================ 00:10:34.541 Security Send/Receive: Not Supported 00:10:34.541 Format NVM: Supported 00:10:34.541 Firmware Activate/Download: Not Supported 00:10:34.541 Namespace Management: Supported 00:10:34.541 Device Self-Test: Not Supported 00:10:34.541 Directives: Supported 00:10:34.541 NVMe-MI: Not Supported 00:10:34.541 Virtualization Management: Not Supported 00:10:34.541 Doorbell Buffer Config: Supported 00:10:34.541 Get LBA Status Capability: Not Supported 00:10:34.541 Command & Feature Lockdown Capability: Not Supported 00:10:34.541 Abort Command Limit: 4 00:10:34.541 Async Event Request Limit: 4 00:10:34.541 Number of Firmware Slots: N/A 00:10:34.541 Firmware Slot 1 Read-Only: N/A 00:10:34.541 Firmware Activation Without Reset: N/A 00:10:34.541 Multiple Update Detection Support: N/A 00:10:34.541 Firmware Update Granularity: No Information Provided 00:10:34.541 Per-Namespace SMART Log: Yes 00:10:34.541 Asymmetric Namespace Access Log Page: Not Supported 00:10:34.541 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:34.541 Command Effects Log Page: Supported 00:10:34.541 Get Log Page Extended Data: Supported 00:10:34.541 Telemetry Log Pages: Not Supported 00:10:34.541 Persistent Event Log Pages: Not Supported 00:10:34.541 Supported Log Pages Log Page: May Support 00:10:34.541 Commands Supported & Effects Log Page: Not Supported 00:10:34.541 Feature Identifiers & Effects Log Page:May Support 00:10:34.541 NVMe-MI Commands & Effects Log Page: May Support 00:10:34.541 Data Area 4 for Telemetry Log: Not Supported 00:10:34.541 Error Log Page Entries Supported: 1 00:10:34.541 Keep Alive: Not Supported 00:10:34.541 00:10:34.541 NVM Command Set Attributes 00:10:34.541 ========================== 00:10:34.541 Submission Queue Entry Size 00:10:34.541 Max: 64 00:10:34.541 Min: 64 00:10:34.541 Completion Queue Entry Size 00:10:34.541 Max: 16 00:10:34.541 Min: 16 00:10:34.541 Number of Namespaces: 256 00:10:34.541 Compare Command: Supported 00:10:34.541 Write Uncorrectable Command: Not Supported 00:10:34.541 Dataset Management Command: Supported 00:10:34.541 Write Zeroes Command: Supported 00:10:34.541 Set Features Save Field: Supported 00:10:34.541 Reservations: Not Supported 00:10:34.541 Timestamp: Supported 00:10:34.541 Copy: Supported 00:10:34.541 Volatile Write Cache: Present 00:10:34.541 Atomic Write Unit 
(Normal): 1 00:10:34.541 Atomic Write Unit (PFail): 1 00:10:34.541 Atomic Compare & Write Unit: 1 00:10:34.541 Fused Compare & Write: Not Supported 00:10:34.541 Scatter-Gather List 00:10:34.541 SGL Command Set: Supported 00:10:34.541 SGL Keyed: Not Supported 00:10:34.541 SGL Bit Bucket Descriptor: Not Supported 00:10:34.541 SGL Metadata Pointer: Not Supported 00:10:34.541 Oversized SGL: Not Supported 00:10:34.541 SGL Metadata Address: Not Supported 00:10:34.541 SGL Offset: Not Supported 00:10:34.541 Transport SGL Data Block: Not Supported 00:10:34.541 Replay Protected Memory Block: Not Supported 00:10:34.541 00:10:34.541 Firmware Slot Information 00:10:34.541 ========================= 00:10:34.541 Active slot: 1 00:10:34.541 Slot 1 Firmware Revision: 1.0 00:10:34.541 00:10:34.541 00:10:34.541 Commands Supported and Effects 00:10:34.541 ============================== 00:10:34.541 Admin Commands 00:10:34.541 -------------- 00:10:34.541 Delete I/O Submission Queue (00h): Supported 00:10:34.541 Create I/O Submission Queue (01h): Supported 00:10:34.541 Get Log Page (02h): Supported 00:10:34.541 Delete I/O Completion Queue (04h): Supported 00:10:34.541 Create I/O Completion Queue (05h): Supported 00:10:34.541 Identify (06h): Supported 00:10:34.541 Abort (08h): Supported 00:10:34.541 Set Features (09h): Supported 00:10:34.541 Get Features (0Ah): Supported 00:10:34.541 Asynchronous Event Request (0Ch): Supported 00:10:34.541 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:34.541 Directive Send (19h): Supported 00:10:34.541 Directive Receive (1Ah): Supported 00:10:34.541 Virtualization Management (1Ch): Supported 00:10:34.541 Doorbell Buffer Config (7Ch): Supported 00:10:34.541 Format NVM (80h): Supported LBA-Change 00:10:34.541 I/O Commands 00:10:34.541 ------------ 00:10:34.541 Flush (00h): Supported LBA-Change 00:10:34.541 Write (01h): Supported LBA-Change 00:10:34.541 Read (02h): Supported 00:10:34.541 Compare (05h): Supported 00:10:34.541 Write Zeroes (08h): Supported LBA-Change 00:10:34.541 Dataset Management (09h): Supported LBA-Change 00:10:34.541 Unknown (0Ch): Supported 00:10:34.541 Unknown (12h): Supported 00:10:34.541 Copy (19h): Supported LBA-Change 00:10:34.541 Unknown (1Dh): Supported LBA-Change 00:10:34.541 00:10:34.541 Error Log 00:10:34.541 ========= 00:10:34.541 00:10:34.541 Arbitration 00:10:34.541 =========== 00:10:34.541 Arbitration Burst: no limit 00:10:34.541 00:10:34.541 Power Management 00:10:34.541 ================ 00:10:34.541 Number of Power States: 1 00:10:34.541 Current Power State: Power State #0 00:10:34.541 Power State #0: 00:10:34.541 Max Power: 25.00 W 00:10:34.541 Non-Operational State: Operational 00:10:34.541 Entry Latency: 16 microseconds 00:10:34.541 Exit Latency: 4 microseconds 00:10:34.541 Relative Read Throughput: 0 00:10:34.541 Relative Read Latency: 0 00:10:34.541 Relative Write Throughput: 0 00:10:34.541 Relative Write Latency: 0 00:10:34.541 Idle Power: Not Reported 00:10:34.541 Active Power: Not Reported 00:10:34.541 Non-Operational Permissive Mode: Not Supported 00:10:34.541 00:10:34.541 Health Information 00:10:34.541 ================== 00:10:34.541 Critical Warnings: 00:10:34.541 Available Spare Space: OK 00:10:34.541 Temperature: OK 00:10:34.541 Device Reliability: OK 00:10:34.541 Read Only: No 00:10:34.541 Volatile Memory Backup: OK 00:10:34.541 Current Temperature: 323 Kelvin (50 Celsius) 00:10:34.541 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:34.541 Available Spare: 0% 00:10:34.541 Available Spare Threshold: 0% 
00:10:34.541 Life Percentage Used: 0% 00:10:34.541 Data Units Read: 771 00:10:34.541 Data Units Written: 700 00:10:34.541 Host Read Commands: 33880 00:10:34.541 Host Write Commands: 33303 00:10:34.541 Controller Busy Time: 0 minutes 00:10:34.541 Power Cycles: 0 00:10:34.541 Power On Hours: 0 hours 00:10:34.541 Unsafe Shutdowns: 0 00:10:34.541 Unrecoverable Media Errors: 0 00:10:34.541 Lifetime Error Log Entries: 0 00:10:34.541 Warning Temperature Time: 0 minutes 00:10:34.541 Critical Temperature Time: 0 minutes 00:10:34.541 00:10:34.541 Number of Queues 00:10:34.541 ================ 00:10:34.541 Number of I/O Submission Queues: 64 00:10:34.541 Number of I/O Completion Queues: 64 00:10:34.541 00:10:34.541 ZNS Specific Controller Data 00:10:34.541 ============================ 00:10:34.541 Zone Append Size Limit: 0 00:10:34.541 00:10:34.541 00:10:34.541 Active Namespaces 00:10:34.541 ================= 00:10:34.541 Namespace ID:1 00:10:34.541 Error Recovery Timeout: Unlimited 00:10:34.541 Command Set Identifier: NVM (00h) 00:10:34.541 Deallocate: Supported 00:10:34.541 Deallocated/Unwritten Error: Supported 00:10:34.541 Deallocated Read Value: All 0x00 00:10:34.541 Deallocate in Write Zeroes: Not Supported 00:10:34.541 Deallocated Guard Field: 0xFFFF 00:10:34.541 Flush: Supported 00:10:34.541 Reservation: Not Supported 00:10:34.541 Namespace Sharing Capabilities: Multiple Controllers 00:10:34.541 Size (in LBAs): 262144 (1GiB) 00:10:34.541 Capacity (in LBAs): 262144 (1GiB) 00:10:34.541 Utilization (in LBAs): 262144 (1GiB) 00:10:34.541 Thin Provisioning: Not Supported 00:10:34.541 Per-NS Atomic Units: No 00:10:34.541 Maximum Single Source Range Length: 128 00:10:34.541 Maximum Copy Length: 128 00:10:34.541 Maximum Source Range Count: 128 00:10:34.541 NGUID/EUI64 Never Reused: No 00:10:34.541 Namespace Write Protected: No 00:10:34.541 Endurance group ID: 1 00:10:34.541 Number of LBA Formats: 8 00:10:34.541 Current LBA Format: LBA Format #04 00:10:34.541 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:34.541 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:34.541 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:34.541 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:34.541 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:34.541 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:34.541 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:34.541 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:34.541 00:10:34.541 Get Feature FDP: 00:10:34.541 ================ 00:10:34.541 Enabled: Yes 00:10:34.541 FDP configuration index: 0 00:10:34.541 00:10:34.541 FDP configurations log page 00:10:34.541 =========================== 00:10:34.541 Number of FDP configurations: 1 00:10:34.541 Version: 0 00:10:34.541 Size: 112 00:10:34.541 FDP Configuration Descriptor: 0 00:10:34.541 Descriptor Size: 96 00:10:34.541 Reclaim Group Identifier format: 2 00:10:34.541 FDP Volatile Write Cache: Not Present 00:10:34.541 FDP Configuration: Valid 00:10:34.541 Vendor Specific Size: 0 00:10:34.541 Number of Reclaim Groups: 2 00:10:34.541 Number of Reclaim Unit Handles: 8 00:10:34.541 Max Placement Identifiers: 128 00:10:34.541 Number of Namespaces Supported: 256 00:10:34.541 Reclaim unit Nominal Size: 6000000 bytes 00:10:34.541 Estimated Reclaim Unit Time Limit: Not Reported 00:10:34.541 RUH Desc #000: RUH Type: Initially Isolated 00:10:34.541 RUH Desc #001: RUH Type: Initially Isolated 00:10:34.541 RUH Desc #002: RUH Type: Initially Isolated 00:10:34.541 RUH Desc #003: RUH Type: Initially Isolated 00:10:34.541 RUH Desc #004: RUH Type: Initially Isolated 00:10:34.541 RUH Desc #005: RUH Type: Initially Isolated 00:10:34.541 RUH Desc #006: RUH Type: Initially Isolated 00:10:34.541 RUH Desc #007: RUH Type: Initially Isolated
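The FDP material above (configurations) and below (reclaim unit handle usage, statistics, events) is served from four ordinary NVMe log pages. If the CUSE nodes created earlier in this run are still up, stock nvme-cli can fetch the raw pages; a sketch assuming a recent nvme-cli that knows --lsi, the /dev/spdk/nvme3 node from the cuse traces, and the log IDs from NVMe TP4146 (20h through 23h):

    dev=/dev/spdk/nvme3   # assumption: CUSE node for the FDP-enabled 12343 subsystem
    for lid in 0x20 0x21 0x22 0x23; do   # configs, RUH usage, stats, events
        echo "== FDP log page $lid =="
        # --lsi selects the endurance group; the report above shows ID 1.
        nvme get-log "$dev" --log-id="$lid" --log-len=512 --lsi=1
    done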
00:10:34.541 00:10:34.541 FDP reclaim unit handle usage log page 00:10:34.541 ====================================== 00:10:34.541 Number of Reclaim Unit Handles: 8 00:10:34.541 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:34.541 RUH Usage Desc #001: RUH Attributes: Unused 00:10:34.541 RUH Usage Desc #002: RUH Attributes: Unused 00:10:34.541 RUH Usage Desc #003: RUH Attributes: Unused 00:10:34.541 RUH Usage Desc #004: RUH Attributes: Unused 00:10:34.541 RUH Usage Desc #005: RUH Attributes: Unused 00:10:34.541 RUH Usage Desc #006: RUH Attributes: Unused 00:10:34.541 RUH Usage Desc #007: RUH Attributes: Unused 00:10:34.541 00:10:34.541 FDP statistics log page 00:10:34.541 ======================= 00:10:34.541 Host bytes with metadata written: 442277888 00:10:34.541 [2024-11-26 18:12:08.812203] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64315 terminated unexpected 00:10:34.541 [2024-11-26 18:12:08.813007] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64315 terminated unexpected 00:10:34.541 [2024-11-26 18:12:08.814535] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64315 terminated unexpected 00:10:34.541 Media bytes with metadata written: 442343424 00:10:34.541 Media bytes erased: 0 00:10:34.541 00:10:34.541 FDP events log page 00:10:34.541 =================== 00:10:34.541 Number of FDP events: 0 00:10:34.541 00:10:34.541 NVM Specific Namespace Data 00:10:34.541 =========================== 00:10:34.541 Logical Block Storage Tag Mask: 0 00:10:34.541 Protection Information Capabilities: 00:10:34.541 16b Guard Protection Information Storage Tag Support: No 00:10:34.541 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:34.541 Storage Tag Check Read Support: No 00:10:34.541 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.541 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.541 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.541 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.541 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.541 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.541 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.541 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
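The statistics block above also gives enough to sanity-check write amplification on the FDP namespace, taken as media bytes written over host bytes written:

    # Rough write-amplification factor from the FDP statistics log page above.
    host=442277888    # Host bytes with metadata written
    media=442343424   # Media bytes with metadata written
    awk -v h="$host" -v m="$media" 'BEGIN { printf "WAF ~ %.4f\n", m / h }'
    # -> WAF ~ 1.0001 (essentially no amplification on this short synthetic run)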
00:10:34.541 ===================================================== 00:10:34.541 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:34.541 ===================================================== 00:10:34.541 Controller Capabilities/Features 00:10:34.541 ================================ 00:10:34.541 Vendor ID: 1b36 00:10:34.541 Subsystem Vendor ID: 1af4 00:10:34.541 Serial Number: 12342 00:10:34.541 Model Number: QEMU NVMe Ctrl 00:10:34.541 Firmware Version: 8.0.0 00:10:34.541 Recommended Arb Burst: 6 00:10:34.541 IEEE OUI Identifier: 00 54 52 00:10:34.541 Multi-path I/O 00:10:34.541 May have multiple subsystem ports: No 00:10:34.541 May have multiple controllers: No 00:10:34.542 Associated with SR-IOV VF: No 00:10:34.542 Max Data Transfer Size: 524288 00:10:34.542 Max Number of Namespaces: 256 00:10:34.542 Max Number of I/O Queues: 64 00:10:34.542 NVMe Specification Version (VS): 1.4 00:10:34.542 NVMe Specification Version (Identify): 1.4 00:10:34.542 Maximum Queue Entries: 2048 00:10:34.542 Contiguous Queues Required: Yes 00:10:34.542 Arbitration Mechanisms Supported 00:10:34.542 Weighted Round Robin: Not Supported 00:10:34.542 Vendor Specific: Not Supported 00:10:34.542 Reset Timeout: 7500 ms 00:10:34.542 Doorbell Stride: 4 bytes 00:10:34.542 NVM Subsystem Reset: Not Supported 00:10:34.542 Command Sets Supported 00:10:34.542 NVM Command Set: Supported 00:10:34.542 Boot Partition: Not Supported 00:10:34.542 Memory Page Size Minimum: 4096 bytes 00:10:34.542 Memory Page Size Maximum: 65536 bytes 00:10:34.542 Persistent Memory Region: Not Supported 00:10:34.542 Optional Asynchronous Events Supported 00:10:34.542 Namespace Attribute Notices: Supported 00:10:34.542 Firmware Activation Notices: Not Supported 00:10:34.542 ANA Change Notices: Not Supported 00:10:34.542 PLE Aggregate Log Change Notices: Not Supported 00:10:34.542 LBA Status Info Alert Notices: Not Supported 00:10:34.542 EGE Aggregate Log Change Notices: Not Supported 00:10:34.542 Normal NVM Subsystem Shutdown event: Not Supported 00:10:34.542 Zone Descriptor Change Notices: Not Supported 00:10:34.542 Discovery Log Change Notices: Not Supported 00:10:34.542 Controller Attributes 00:10:34.542 128-bit Host Identifier: Not Supported 00:10:34.542 Non-Operational Permissive Mode: Not Supported 00:10:34.542 NVM Sets: Not Supported 00:10:34.542 Read Recovery Levels: Not Supported 00:10:34.542 Endurance Groups: Not Supported 00:10:34.542 Predictable Latency Mode: Not Supported 00:10:34.542 Traffic Based Keep Alive: Not Supported 00:10:34.542 Namespace Granularity: Not Supported 00:10:34.542 SQ Associations: Not Supported 00:10:34.542 UUID List: Not Supported 00:10:34.542 Multi-Domain Subsystem: Not Supported 00:10:34.542 Fixed Capacity Management: Not Supported 00:10:34.542 Variable Capacity Management: Not Supported 00:10:34.542 Delete Endurance Group: Not Supported 00:10:34.542 Delete NVM Set: Not Supported 00:10:34.542 Extended LBA Formats Supported: Supported 00:10:34.542 Flexible Data Placement Supported: Not Supported 00:10:34.542 00:10:34.542 Controller Memory Buffer Support 00:10:34.542 ================================ 00:10:34.542 Supported: No 00:10:34.542 00:10:34.542 Persistent Memory Region Support 00:10:34.542 ================================ 00:10:34.542 Supported: No 00:10:34.542 00:10:34.542 Admin Command Set Attributes 00:10:34.542 ============================ 00:10:34.542 Security Send/Receive: Not Supported 00:10:34.542 Format NVM: Supported 00:10:34.542 Firmware Activate/Download: Not Supported 00:10:34.542 Namespace Management: Supported 00:10:34.542 Device Self-Test: Not Supported 00:10:34.542 Directives: Supported 00:10:34.542 NVMe-MI: Not Supported 00:10:34.542 Virtualization Management: Not Supported 00:10:34.542 Doorbell Buffer Config: Supported 00:10:34.542 Get LBA Status Capability: Not Supported 00:10:34.542 Command & Feature Lockdown Capability: Not Supported 00:10:34.542 Abort Command Limit: 4 00:10:34.542 Async Event Request Limit: 4 00:10:34.542 Number of Firmware Slots: N/A 00:10:34.542 Firmware Slot 1 Read-Only: N/A 00:10:34.542 Firmware Activation Without Reset: N/A 00:10:34.542 
Multiple Update Detection Support: N/A 00:10:34.542 Firmware Update Granularity: No Information Provided 00:10:34.542 Per-Namespace SMART Log: Yes 00:10:34.542 Asymmetric Namespace Access Log Page: Not Supported 00:10:34.542 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:34.542 Command Effects Log Page: Supported 00:10:34.542 Get Log Page Extended Data: Supported 00:10:34.542 Telemetry Log Pages: Not Supported 00:10:34.542 Persistent Event Log Pages: Not Supported 00:10:34.542 Supported Log Pages Log Page: May Support 00:10:34.542 Commands Supported & Effects Log Page: Not Supported 00:10:34.542 Feature Identifiers & Effects Log Page:May Support 00:10:34.542 NVMe-MI Commands & Effects Log Page: May Support 00:10:34.542 Data Area 4 for Telemetry Log: Not Supported 00:10:34.542 Error Log Page Entries Supported: 1 00:10:34.542 Keep Alive: Not Supported 00:10:34.542 00:10:34.542 NVM Command Set Attributes 00:10:34.542 ========================== 00:10:34.542 Submission Queue Entry Size 00:10:34.542 Max: 64 00:10:34.542 Min: 64 00:10:34.542 Completion Queue Entry Size 00:10:34.542 Max: 16 00:10:34.542 Min: 16 00:10:34.542 Number of Namespaces: 256 00:10:34.542 Compare Command: Supported 00:10:34.542 Write Uncorrectable Command: Not Supported 00:10:34.542 Dataset Management Command: Supported 00:10:34.542 Write Zeroes Command: Supported 00:10:34.542 Set Features Save Field: Supported 00:10:34.542 Reservations: Not Supported 00:10:34.542 Timestamp: Supported 00:10:34.542 Copy: Supported 00:10:34.542 Volatile Write Cache: Present 00:10:34.542 Atomic Write Unit (Normal): 1 00:10:34.542 Atomic Write Unit (PFail): 1 00:10:34.542 Atomic Compare & Write Unit: 1 00:10:34.542 Fused Compare & Write: Not Supported 00:10:34.542 Scatter-Gather List 00:10:34.542 SGL Command Set: Supported 00:10:34.542 SGL Keyed: Not Supported 00:10:34.542 SGL Bit Bucket Descriptor: Not Supported 00:10:34.542 SGL Metadata Pointer: Not Supported 00:10:34.542 Oversized SGL: Not Supported 00:10:34.542 SGL Metadata Address: Not Supported 00:10:34.542 SGL Offset: Not Supported 00:10:34.542 Transport SGL Data Block: Not Supported 00:10:34.542 Replay Protected Memory Block: Not Supported 00:10:34.542 00:10:34.542 Firmware Slot Information 00:10:34.542 ========================= 00:10:34.542 Active slot: 1 00:10:34.542 Slot 1 Firmware Revision: 1.0 00:10:34.542 00:10:34.542 00:10:34.542 Commands Supported and Effects 00:10:34.542 ============================== 00:10:34.542 Admin Commands 00:10:34.542 -------------- 00:10:34.542 Delete I/O Submission Queue (00h): Supported 00:10:34.542 Create I/O Submission Queue (01h): Supported 00:10:34.542 Get Log Page (02h): Supported 00:10:34.542 Delete I/O Completion Queue (04h): Supported 00:10:34.542 Create I/O Completion Queue (05h): Supported 00:10:34.542 Identify (06h): Supported 00:10:34.542 Abort (08h): Supported 00:10:34.542 Set Features (09h): Supported 00:10:34.542 Get Features (0Ah): Supported 00:10:34.542 Asynchronous Event Request (0Ch): Supported 00:10:34.542 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:34.542 Directive Send (19h): Supported 00:10:34.542 Directive Receive (1Ah): Supported 00:10:34.542 Virtualization Management (1Ch): Supported 00:10:34.542 Doorbell Buffer Config (7Ch): Supported 00:10:34.542 Format NVM (80h): Supported LBA-Change 00:10:34.542 I/O Commands 00:10:34.542 ------------ 00:10:34.542 Flush (00h): Supported LBA-Change 00:10:34.542 Write (01h): Supported LBA-Change 00:10:34.542 Read (02h): Supported 00:10:34.542 Compare (05h): Supported 
00:10:34.542 Write Zeroes (08h): Supported LBA-Change 00:10:34.542 Dataset Management (09h): Supported LBA-Change 00:10:34.542 Unknown (0Ch): Supported 00:10:34.542 Unknown (12h): Supported 00:10:34.542 Copy (19h): Supported LBA-Change 00:10:34.542 Unknown (1Dh): Supported LBA-Change 00:10:34.542 00:10:34.542 Error Log 00:10:34.542 ========= 00:10:34.542 00:10:34.542 Arbitration 00:10:34.542 =========== 00:10:34.542 Arbitration Burst: no limit 00:10:34.542 00:10:34.542 Power Management 00:10:34.542 ================ 00:10:34.542 Number of Power States: 1 00:10:34.542 Current Power State: Power State #0 00:10:34.542 Power State #0: 00:10:34.542 Max Power: 25.00 W 00:10:34.542 Non-Operational State: Operational 00:10:34.542 Entry Latency: 16 microseconds 00:10:34.542 Exit Latency: 4 microseconds 00:10:34.542 Relative Read Throughput: 0 00:10:34.542 Relative Read Latency: 0 00:10:34.542 Relative Write Throughput: 0 00:10:34.542 Relative Write Latency: 0 00:10:34.542 Idle Power: Not Reported 00:10:34.542 Active Power: Not Reported 00:10:34.542 Non-Operational Permissive Mode: Not Supported 00:10:34.542 00:10:34.542 Health Information 00:10:34.542 ================== 00:10:34.542 Critical Warnings: 00:10:34.542 Available Spare Space: OK 00:10:34.542 Temperature: OK 00:10:34.542 Device Reliability: OK 00:10:34.542 Read Only: No 00:10:34.542 Volatile Memory Backup: OK 00:10:34.542 Current Temperature: 323 Kelvin (50 Celsius) 00:10:34.542 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:34.542 Available Spare: 0% 00:10:34.542 Available Spare Threshold: 0% 00:10:34.542 Life Percentage Used: 0% 00:10:34.542 Data Units Read: 2094 00:10:34.542 Data Units Written: 1882 00:10:34.542 Host Read Commands: 99943 00:10:34.542 Host Write Commands: 98212 00:10:34.542 Controller Busy Time: 0 minutes 00:10:34.542 Power Cycles: 0 00:10:34.542 Power On Hours: 0 hours 00:10:34.542 Unsafe Shutdowns: 0 00:10:34.542 Unrecoverable Media Errors: 0 00:10:34.542 Lifetime Error Log Entries: 0 00:10:34.542 Warning Temperature Time: 0 minutes 00:10:34.542 Critical Temperature Time: 0 minutes 00:10:34.542 00:10:34.542 Number of Queues 00:10:34.542 ================ 00:10:34.542 Number of I/O Submission Queues: 64 00:10:34.542 Number of I/O Completion Queues: 64 00:10:34.542 00:10:34.542 ZNS Specific Controller Data 00:10:34.542 ============================ 00:10:34.542 Zone Append Size Limit: 0 00:10:34.542 00:10:34.542 00:10:34.542 Active Namespaces 00:10:34.542 ================= 00:10:34.542 Namespace ID:1 00:10:34.542 Error Recovery Timeout: Unlimited 00:10:34.542 Command Set Identifier: NVM (00h) 00:10:34.542 Deallocate: Supported 00:10:34.542 Deallocated/Unwritten Error: Supported 00:10:34.542 Deallocated Read Value: All 0x00 00:10:34.542 Deallocate in Write Zeroes: Not Supported 00:10:34.542 Deallocated Guard Field: 0xFFFF 00:10:34.542 Flush: Supported 00:10:34.542 Reservation: Not Supported 00:10:34.542 Namespace Sharing Capabilities: Private 00:10:34.542 Size (in LBAs): 1048576 (4GiB) 00:10:34.542 Capacity (in LBAs): 1048576 (4GiB) 00:10:34.542 Utilization (in LBAs): 1048576 (4GiB) 00:10:34.542 Thin Provisioning: Not Supported 00:10:34.542 Per-NS Atomic Units: No 00:10:34.542 Maximum Single Source Range Length: 128 00:10:34.542 Maximum Copy Length: 128 00:10:34.542 Maximum Source Range Count: 128 00:10:34.542 NGUID/EUI64 Never Reused: No 00:10:34.542 Namespace Write Protected: No 00:10:34.542 Number of LBA Formats: 8 00:10:34.542 Current LBA Format: LBA Format #04 00:10:34.542 LBA Format #00: Data Size: 512 Metadata 
Size: 0 00:10:34.542 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:34.542 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:34.542 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:34.542 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:34.542 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:34.542 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:34.542 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:34.542 00:10:34.542 NVM Specific Namespace Data 00:10:34.542 =========================== 00:10:34.542 Logical Block Storage Tag Mask: 0 00:10:34.542 Protection Information Capabilities: 00:10:34.542 16b Guard Protection Information Storage Tag Support: No 00:10:34.542 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:34.542 Storage Tag Check Read Support: No 00:10:34.542 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.542 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.542 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.542 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.542 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.542 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.542 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.542 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.542 Namespace ID:2 00:10:34.542 Error Recovery Timeout: Unlimited 00:10:34.542 Command Set Identifier: NVM (00h) 00:10:34.542 Deallocate: Supported 00:10:34.542 Deallocated/Unwritten Error: Supported 00:10:34.542 Deallocated Read Value: All 0x00 00:10:34.542 Deallocate in Write Zeroes: Not Supported 00:10:34.542 Deallocated Guard Field: 0xFFFF 00:10:34.542 Flush: Supported 00:10:34.542 Reservation: Not Supported 00:10:34.542 Namespace Sharing Capabilities: Private 00:10:34.542 Size (in LBAs): 1048576 (4GiB) 00:10:34.542 Capacity (in LBAs): 1048576 (4GiB) 00:10:34.542 Utilization (in LBAs): 1048576 (4GiB) 00:10:34.542 Thin Provisioning: Not Supported 00:10:34.542 Per-NS Atomic Units: No 00:10:34.542 Maximum Single Source Range Length: 128 00:10:34.542 Maximum Copy Length: 128 00:10:34.542 Maximum Source Range Count: 128 00:10:34.542 NGUID/EUI64 Never Reused: No 00:10:34.542 Namespace Write Protected: No 00:10:34.542 Number of LBA Formats: 8 00:10:34.542 Current LBA Format: LBA Format #04 00:10:34.542 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:34.542 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:34.542 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:34.542 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:34.542 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:34.542 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:34.542 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:34.542 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:34.542 00:10:34.542 NVM Specific Namespace Data 00:10:34.542 =========================== 00:10:34.542 Logical Block Storage Tag Mask: 0 00:10:34.542 Protection Information Capabilities: 00:10:34.542 16b Guard Protection Information Storage Tag Support: No 00:10:34.542 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:34.542 Storage 
Tag Check Read Support: No 00:10:34.542 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.542 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.542 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.542 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.542 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.542 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.542 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.542 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.542 Namespace ID:3 00:10:34.542 Error Recovery Timeout: Unlimited 00:10:34.542 Command Set Identifier: NVM (00h) 00:10:34.542 Deallocate: Supported 00:10:34.542 Deallocated/Unwritten Error: Supported 00:10:34.542 Deallocated Read Value: All 0x00 00:10:34.542 Deallocate in Write Zeroes: Not Supported 00:10:34.542 Deallocated Guard Field: 0xFFFF 00:10:34.542 Flush: Supported 00:10:34.542 Reservation: Not Supported 00:10:34.542 Namespace Sharing Capabilities: Private 00:10:34.542 Size (in LBAs): 1048576 (4GiB) 00:10:34.542 Capacity (in LBAs): 1048576 (4GiB) 00:10:34.543 Utilization (in LBAs): 1048576 (4GiB) 00:10:34.543 Thin Provisioning: Not Supported 00:10:34.543 Per-NS Atomic Units: No 00:10:34.543 Maximum Single Source Range Length: 128 00:10:34.543 Maximum Copy Length: 128 00:10:34.543 Maximum Source Range Count: 128 00:10:34.543 NGUID/EUI64 Never Reused: No 00:10:34.543 Namespace Write Protected: No 00:10:34.543 Number of LBA Formats: 8 00:10:34.543 Current LBA Format: LBA Format #04 00:10:34.543 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:34.543 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:34.543 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:34.543 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:34.543 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:34.543 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:34.543 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:34.543 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:34.543 00:10:34.543 NVM Specific Namespace Data 00:10:34.543 =========================== 00:10:34.543 Logical Block Storage Tag Mask: 0 00:10:34.543 Protection Information Capabilities: 00:10:34.543 16b Guard Protection Information Storage Tag Support: No 00:10:34.543 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:34.543 Storage Tag Check Read Support: No 00:10:34.543 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.543 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.543 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.543 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.543 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.543 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.543 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.543 Extended LBA Format #07: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:10:34.543 18:12:08 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:34.543 18:12:08 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:10:34.801 ===================================================== 00:10:34.801 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:34.801 ===================================================== 00:10:34.801 Controller Capabilities/Features 00:10:34.801 ================================ 00:10:34.801 Vendor ID: 1b36 00:10:34.801 Subsystem Vendor ID: 1af4 00:10:34.801 Serial Number: 12340 00:10:34.801 Model Number: QEMU NVMe Ctrl 00:10:34.801 Firmware Version: 8.0.0 00:10:34.801 Recommended Arb Burst: 6 00:10:34.801 IEEE OUI Identifier: 00 54 52 00:10:34.801 Multi-path I/O 00:10:34.801 May have multiple subsystem ports: No 00:10:34.801 May have multiple controllers: No 00:10:34.801 Associated with SR-IOV VF: No 00:10:34.801 Max Data Transfer Size: 524288 00:10:34.801 Max Number of Namespaces: 256 00:10:34.801 Max Number of I/O Queues: 64 00:10:34.801 NVMe Specification Version (VS): 1.4 00:10:34.801 NVMe Specification Version (Identify): 1.4 00:10:34.801 Maximum Queue Entries: 2048 00:10:34.801 Contiguous Queues Required: Yes 00:10:34.801 Arbitration Mechanisms Supported 00:10:34.801 Weighted Round Robin: Not Supported 00:10:34.801 Vendor Specific: Not Supported 00:10:34.801 Reset Timeout: 7500 ms 00:10:34.801 Doorbell Stride: 4 bytes 00:10:34.801 NVM Subsystem Reset: Not Supported 00:10:34.801 Command Sets Supported 00:10:34.801 NVM Command Set: Supported 00:10:34.801 Boot Partition: Not Supported 00:10:34.801 Memory Page Size Minimum: 4096 bytes 00:10:34.801 Memory Page Size Maximum: 65536 bytes 00:10:34.801 Persistent Memory Region: Not Supported 00:10:34.801 Optional Asynchronous Events Supported 00:10:34.801 Namespace Attribute Notices: Supported 00:10:34.801 Firmware Activation Notices: Not Supported 00:10:34.801 ANA Change Notices: Not Supported 00:10:34.801 PLE Aggregate Log Change Notices: Not Supported 00:10:34.801 LBA Status Info Alert Notices: Not Supported 00:10:34.801 EGE Aggregate Log Change Notices: Not Supported 00:10:34.801 Normal NVM Subsystem Shutdown event: Not Supported 00:10:34.801 Zone Descriptor Change Notices: Not Supported 00:10:34.801 Discovery Log Change Notices: Not Supported 00:10:34.801 Controller Attributes 00:10:34.801 128-bit Host Identifier: Not Supported 00:10:34.801 Non-Operational Permissive Mode: Not Supported 00:10:34.801 NVM Sets: Not Supported 00:10:34.801 Read Recovery Levels: Not Supported 00:10:34.801 Endurance Groups: Not Supported 00:10:34.801 Predictable Latency Mode: Not Supported 00:10:34.801 Traffic Based Keep ALive: Not Supported 00:10:34.801 Namespace Granularity: Not Supported 00:10:34.801 SQ Associations: Not Supported 00:10:34.801 UUID List: Not Supported 00:10:34.801 Multi-Domain Subsystem: Not Supported 00:10:34.801 Fixed Capacity Management: Not Supported 00:10:34.801 Variable Capacity Management: Not Supported 00:10:34.801 Delete Endurance Group: Not Supported 00:10:34.801 Delete NVM Set: Not Supported 00:10:34.801 Extended LBA Formats Supported: Supported 00:10:34.801 Flexible Data Placement Supported: Not Supported 00:10:34.801 00:10:34.801 Controller Memory Buffer Support 00:10:34.801 ================================ 00:10:34.801 Supported: No 00:10:34.801 00:10:34.801 Persistent Memory Region Support 00:10:34.801 
================================ 00:10:34.801 Supported: No 00:10:34.801 00:10:34.801 Admin Command Set Attributes 00:10:34.801 ============================ 00:10:34.801 Security Send/Receive: Not Supported 00:10:34.801 Format NVM: Supported 00:10:34.801 Firmware Activate/Download: Not Supported 00:10:34.801 Namespace Management: Supported 00:10:34.801 Device Self-Test: Not Supported 00:10:34.801 Directives: Supported 00:10:34.801 NVMe-MI: Not Supported 00:10:34.801 Virtualization Management: Not Supported 00:10:34.801 Doorbell Buffer Config: Supported 00:10:34.801 Get LBA Status Capability: Not Supported 00:10:34.801 Command & Feature Lockdown Capability: Not Supported 00:10:34.801 Abort Command Limit: 4 00:10:34.801 Async Event Request Limit: 4 00:10:34.801 Number of Firmware Slots: N/A 00:10:34.801 Firmware Slot 1 Read-Only: N/A 00:10:34.801 Firmware Activation Without Reset: N/A 00:10:34.801 Multiple Update Detection Support: N/A 00:10:34.801 Firmware Update Granularity: No Information Provided 00:10:34.801 Per-Namespace SMART Log: Yes 00:10:34.801 Asymmetric Namespace Access Log Page: Not Supported 00:10:34.801 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:34.801 Command Effects Log Page: Supported 00:10:34.801 Get Log Page Extended Data: Supported 00:10:34.801 Telemetry Log Pages: Not Supported 00:10:34.801 Persistent Event Log Pages: Not Supported 00:10:34.801 Supported Log Pages Log Page: May Support 00:10:34.801 Commands Supported & Effects Log Page: Not Supported 00:10:34.801 Feature Identifiers & Effects Log Page:May Support 00:10:34.802 NVMe-MI Commands & Effects Log Page: May Support 00:10:34.802 Data Area 4 for Telemetry Log: Not Supported 00:10:34.802 Error Log Page Entries Supported: 1 00:10:34.802 Keep Alive: Not Supported 00:10:34.802 00:10:34.802 NVM Command Set Attributes 00:10:34.802 ========================== 00:10:34.802 Submission Queue Entry Size 00:10:34.802 Max: 64 00:10:34.802 Min: 64 00:10:34.802 Completion Queue Entry Size 00:10:34.802 Max: 16 00:10:34.802 Min: 16 00:10:34.802 Number of Namespaces: 256 00:10:34.802 Compare Command: Supported 00:10:34.802 Write Uncorrectable Command: Not Supported 00:10:34.802 Dataset Management Command: Supported 00:10:34.802 Write Zeroes Command: Supported 00:10:34.802 Set Features Save Field: Supported 00:10:34.802 Reservations: Not Supported 00:10:34.802 Timestamp: Supported 00:10:34.802 Copy: Supported 00:10:34.802 Volatile Write Cache: Present 00:10:34.802 Atomic Write Unit (Normal): 1 00:10:34.802 Atomic Write Unit (PFail): 1 00:10:34.802 Atomic Compare & Write Unit: 1 00:10:34.802 Fused Compare & Write: Not Supported 00:10:34.802 Scatter-Gather List 00:10:34.802 SGL Command Set: Supported 00:10:34.802 SGL Keyed: Not Supported 00:10:34.802 SGL Bit Bucket Descriptor: Not Supported 00:10:34.802 SGL Metadata Pointer: Not Supported 00:10:34.802 Oversized SGL: Not Supported 00:10:34.802 SGL Metadata Address: Not Supported 00:10:34.802 SGL Offset: Not Supported 00:10:34.802 Transport SGL Data Block: Not Supported 00:10:34.802 Replay Protected Memory Block: Not Supported 00:10:34.802 00:10:34.802 Firmware Slot Information 00:10:34.802 ========================= 00:10:34.802 Active slot: 1 00:10:34.802 Slot 1 Firmware Revision: 1.0 00:10:34.802 00:10:34.802 00:10:34.802 Commands Supported and Effects 00:10:34.802 ============================== 00:10:34.802 Admin Commands 00:10:34.802 -------------- 00:10:34.802 Delete I/O Submission Queue (00h): Supported 00:10:34.802 Create I/O Submission Queue (01h): Supported 00:10:34.802 
Get Log Page (02h): Supported 00:10:34.802 Delete I/O Completion Queue (04h): Supported 00:10:34.802 Create I/O Completion Queue (05h): Supported 00:10:34.802 Identify (06h): Supported 00:10:34.802 Abort (08h): Supported 00:10:34.802 Set Features (09h): Supported 00:10:34.802 Get Features (0Ah): Supported 00:10:34.802 Asynchronous Event Request (0Ch): Supported 00:10:34.802 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:34.802 Directive Send (19h): Supported 00:10:34.802 Directive Receive (1Ah): Supported 00:10:34.802 Virtualization Management (1Ch): Supported 00:10:34.802 Doorbell Buffer Config (7Ch): Supported 00:10:34.802 Format NVM (80h): Supported LBA-Change 00:10:34.802 I/O Commands 00:10:34.802 ------------ 00:10:34.802 Flush (00h): Supported LBA-Change 00:10:34.802 Write (01h): Supported LBA-Change 00:10:34.802 Read (02h): Supported 00:10:34.802 Compare (05h): Supported 00:10:34.802 Write Zeroes (08h): Supported LBA-Change 00:10:34.802 Dataset Management (09h): Supported LBA-Change 00:10:34.802 Unknown (0Ch): Supported 00:10:34.802 Unknown (12h): Supported 00:10:34.802 Copy (19h): Supported LBA-Change 00:10:34.802 Unknown (1Dh): Supported LBA-Change 00:10:34.802 00:10:34.802 Error Log 00:10:34.802 ========= 00:10:34.802 00:10:34.802 Arbitration 00:10:34.802 =========== 00:10:34.802 Arbitration Burst: no limit 00:10:34.802 00:10:34.802 Power Management 00:10:34.802 ================ 00:10:34.802 Number of Power States: 1 00:10:34.802 Current Power State: Power State #0 00:10:34.802 Power State #0: 00:10:34.802 Max Power: 25.00 W 00:10:34.802 Non-Operational State: Operational 00:10:34.802 Entry Latency: 16 microseconds 00:10:34.802 Exit Latency: 4 microseconds 00:10:34.802 Relative Read Throughput: 0 00:10:34.802 Relative Read Latency: 0 00:10:34.802 Relative Write Throughput: 0 00:10:34.802 Relative Write Latency: 0 00:10:34.802 Idle Power: Not Reported 00:10:34.802 Active Power: Not Reported 00:10:34.802 Non-Operational Permissive Mode: Not Supported 00:10:34.802 00:10:34.802 Health Information 00:10:34.802 ================== 00:10:34.802 Critical Warnings: 00:10:34.802 Available Spare Space: OK 00:10:34.802 Temperature: OK 00:10:34.802 Device Reliability: OK 00:10:34.802 Read Only: No 00:10:34.802 Volatile Memory Backup: OK 00:10:34.802 Current Temperature: 323 Kelvin (50 Celsius) 00:10:34.802 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:34.802 Available Spare: 0% 00:10:34.802 Available Spare Threshold: 0% 00:10:34.802 Life Percentage Used: 0% 00:10:34.802 Data Units Read: 668 00:10:34.802 Data Units Written: 596 00:10:34.802 Host Read Commands: 32897 00:10:34.802 Host Write Commands: 32683 00:10:34.802 Controller Busy Time: 0 minutes 00:10:34.802 Power Cycles: 0 00:10:34.802 Power On Hours: 0 hours 00:10:34.802 Unsafe Shutdowns: 0 00:10:34.802 Unrecoverable Media Errors: 0 00:10:34.802 Lifetime Error Log Entries: 0 00:10:34.802 Warning Temperature Time: 0 minutes 00:10:34.802 Critical Temperature Time: 0 minutes 00:10:34.802 00:10:34.802 Number of Queues 00:10:34.802 ================ 00:10:34.802 Number of I/O Submission Queues: 64 00:10:34.802 Number of I/O Completion Queues: 64 00:10:34.802 00:10:34.802 ZNS Specific Controller Data 00:10:34.802 ============================ 00:10:34.802 Zone Append Size Limit: 0 00:10:34.802 00:10:34.802 00:10:34.802 Active Namespaces 00:10:34.802 ================= 00:10:34.802 Namespace ID:1 00:10:34.802 Error Recovery Timeout: Unlimited 00:10:34.802 Command Set Identifier: NVM (00h) 00:10:34.802 Deallocate: Supported 
00:10:34.802 Deallocated/Unwritten Error: Supported 00:10:34.802 Deallocated Read Value: All 0x00 00:10:34.802 Deallocate in Write Zeroes: Not Supported 00:10:34.802 Deallocated Guard Field: 0xFFFF 00:10:34.802 Flush: Supported 00:10:34.802 Reservation: Not Supported 00:10:34.802 Metadata Transferred as: Separate Metadata Buffer 00:10:34.802 Namespace Sharing Capabilities: Private 00:10:34.802 Size (in LBAs): 1548666 (5GiB) 00:10:34.802 Capacity (in LBAs): 1548666 (5GiB) 00:10:34.802 Utilization (in LBAs): 1548666 (5GiB) 00:10:34.802 Thin Provisioning: Not Supported 00:10:34.802 Per-NS Atomic Units: No 00:10:34.802 Maximum Single Source Range Length: 128 00:10:34.802 Maximum Copy Length: 128 00:10:34.802 Maximum Source Range Count: 128 00:10:34.802 NGUID/EUI64 Never Reused: No 00:10:34.802 Namespace Write Protected: No 00:10:34.802 Number of LBA Formats: 8 00:10:34.802 Current LBA Format: LBA Format #07 00:10:34.802 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:34.802 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:34.802 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:34.802 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:34.802 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:34.802 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:34.802 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:34.802 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:34.802 00:10:34.802 NVM Specific Namespace Data 00:10:34.802 =========================== 00:10:34.802 Logical Block Storage Tag Mask: 0 00:10:34.802 Protection Information Capabilities: 00:10:34.802 16b Guard Protection Information Storage Tag Support: No 00:10:34.802 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:34.802 Storage Tag Check Read Support: No 00:10:34.802 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.802 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.802 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.802 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.802 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.802 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.802 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.802 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:34.802 18:12:09 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:34.802 18:12:09 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:10:35.368 ===================================================== 00:10:35.368 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:35.368 ===================================================== 00:10:35.368 Controller Capabilities/Features 00:10:35.368 ================================ 00:10:35.369 Vendor ID: 1b36 00:10:35.369 Subsystem Vendor ID: 1af4 00:10:35.369 Serial Number: 12341 00:10:35.369 Model Number: QEMU NVMe Ctrl 00:10:35.369 Firmware Version: 8.0.0 00:10:35.369 Recommended Arb Burst: 6 00:10:35.369 IEEE OUI Identifier: 00 54 52 00:10:35.369 Multi-path I/O 00:10:35.369 May have multiple subsystem ports: No 00:10:35.369 May have multiple 
controllers: No 00:10:35.369 Associated with SR-IOV VF: No 00:10:35.369 Max Data Transfer Size: 524288 00:10:35.369 Max Number of Namespaces: 256 00:10:35.369 Max Number of I/O Queues: 64 00:10:35.369 NVMe Specification Version (VS): 1.4 00:10:35.369 NVMe Specification Version (Identify): 1.4 00:10:35.369 Maximum Queue Entries: 2048 00:10:35.369 Contiguous Queues Required: Yes 00:10:35.369 Arbitration Mechanisms Supported 00:10:35.369 Weighted Round Robin: Not Supported 00:10:35.369 Vendor Specific: Not Supported 00:10:35.369 Reset Timeout: 7500 ms 00:10:35.369 Doorbell Stride: 4 bytes 00:10:35.369 NVM Subsystem Reset: Not Supported 00:10:35.369 Command Sets Supported 00:10:35.369 NVM Command Set: Supported 00:10:35.369 Boot Partition: Not Supported 00:10:35.369 Memory Page Size Minimum: 4096 bytes 00:10:35.369 Memory Page Size Maximum: 65536 bytes 00:10:35.369 Persistent Memory Region: Not Supported 00:10:35.369 Optional Asynchronous Events Supported 00:10:35.369 Namespace Attribute Notices: Supported 00:10:35.369 Firmware Activation Notices: Not Supported 00:10:35.369 ANA Change Notices: Not Supported 00:10:35.369 PLE Aggregate Log Change Notices: Not Supported 00:10:35.369 LBA Status Info Alert Notices: Not Supported 00:10:35.369 EGE Aggregate Log Change Notices: Not Supported 00:10:35.369 Normal NVM Subsystem Shutdown event: Not Supported 00:10:35.369 Zone Descriptor Change Notices: Not Supported 00:10:35.369 Discovery Log Change Notices: Not Supported 00:10:35.369 Controller Attributes 00:10:35.369 128-bit Host Identifier: Not Supported 00:10:35.369 Non-Operational Permissive Mode: Not Supported 00:10:35.369 NVM Sets: Not Supported 00:10:35.369 Read Recovery Levels: Not Supported 00:10:35.369 Endurance Groups: Not Supported 00:10:35.369 Predictable Latency Mode: Not Supported 00:10:35.369 Traffic Based Keep ALive: Not Supported 00:10:35.369 Namespace Granularity: Not Supported 00:10:35.369 SQ Associations: Not Supported 00:10:35.369 UUID List: Not Supported 00:10:35.369 Multi-Domain Subsystem: Not Supported 00:10:35.369 Fixed Capacity Management: Not Supported 00:10:35.369 Variable Capacity Management: Not Supported 00:10:35.369 Delete Endurance Group: Not Supported 00:10:35.369 Delete NVM Set: Not Supported 00:10:35.369 Extended LBA Formats Supported: Supported 00:10:35.369 Flexible Data Placement Supported: Not Supported 00:10:35.369 00:10:35.369 Controller Memory Buffer Support 00:10:35.369 ================================ 00:10:35.369 Supported: No 00:10:35.369 00:10:35.369 Persistent Memory Region Support 00:10:35.369 ================================ 00:10:35.369 Supported: No 00:10:35.369 00:10:35.369 Admin Command Set Attributes 00:10:35.369 ============================ 00:10:35.369 Security Send/Receive: Not Supported 00:10:35.369 Format NVM: Supported 00:10:35.369 Firmware Activate/Download: Not Supported 00:10:35.369 Namespace Management: Supported 00:10:35.369 Device Self-Test: Not Supported 00:10:35.369 Directives: Supported 00:10:35.369 NVMe-MI: Not Supported 00:10:35.369 Virtualization Management: Not Supported 00:10:35.369 Doorbell Buffer Config: Supported 00:10:35.369 Get LBA Status Capability: Not Supported 00:10:35.369 Command & Feature Lockdown Capability: Not Supported 00:10:35.369 Abort Command Limit: 4 00:10:35.369 Async Event Request Limit: 4 00:10:35.369 Number of Firmware Slots: N/A 00:10:35.369 Firmware Slot 1 Read-Only: N/A 00:10:35.369 Firmware Activation Without Reset: N/A 00:10:35.369 Multiple Update Detection Support: N/A 00:10:35.369 Firmware Update 
Granularity: No Information Provided 00:10:35.369 Per-Namespace SMART Log: Yes 00:10:35.369 Asymmetric Namespace Access Log Page: Not Supported 00:10:35.369 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:35.369 Command Effects Log Page: Supported 00:10:35.369 Get Log Page Extended Data: Supported 00:10:35.369 Telemetry Log Pages: Not Supported 00:10:35.369 Persistent Event Log Pages: Not Supported 00:10:35.369 Supported Log Pages Log Page: May Support 00:10:35.369 Commands Supported & Effects Log Page: Not Supported 00:10:35.369 Feature Identifiers & Effects Log Page:May Support 00:10:35.369 NVMe-MI Commands & Effects Log Page: May Support 00:10:35.369 Data Area 4 for Telemetry Log: Not Supported 00:10:35.369 Error Log Page Entries Supported: 1 00:10:35.369 Keep Alive: Not Supported 00:10:35.369 00:10:35.369 NVM Command Set Attributes 00:10:35.369 ========================== 00:10:35.369 Submission Queue Entry Size 00:10:35.369 Max: 64 00:10:35.369 Min: 64 00:10:35.369 Completion Queue Entry Size 00:10:35.369 Max: 16 00:10:35.369 Min: 16 00:10:35.369 Number of Namespaces: 256 00:10:35.369 Compare Command: Supported 00:10:35.369 Write Uncorrectable Command: Not Supported 00:10:35.369 Dataset Management Command: Supported 00:10:35.369 Write Zeroes Command: Supported 00:10:35.369 Set Features Save Field: Supported 00:10:35.369 Reservations: Not Supported 00:10:35.369 Timestamp: Supported 00:10:35.369 Copy: Supported 00:10:35.369 Volatile Write Cache: Present 00:10:35.369 Atomic Write Unit (Normal): 1 00:10:35.369 Atomic Write Unit (PFail): 1 00:10:35.369 Atomic Compare & Write Unit: 1 00:10:35.369 Fused Compare & Write: Not Supported 00:10:35.369 Scatter-Gather List 00:10:35.369 SGL Command Set: Supported 00:10:35.369 SGL Keyed: Not Supported 00:10:35.369 SGL Bit Bucket Descriptor: Not Supported 00:10:35.369 SGL Metadata Pointer: Not Supported 00:10:35.369 Oversized SGL: Not Supported 00:10:35.369 SGL Metadata Address: Not Supported 00:10:35.369 SGL Offset: Not Supported 00:10:35.369 Transport SGL Data Block: Not Supported 00:10:35.369 Replay Protected Memory Block: Not Supported 00:10:35.369 00:10:35.369 Firmware Slot Information 00:10:35.369 ========================= 00:10:35.369 Active slot: 1 00:10:35.369 Slot 1 Firmware Revision: 1.0 00:10:35.369 00:10:35.369 00:10:35.369 Commands Supported and Effects 00:10:35.369 ============================== 00:10:35.369 Admin Commands 00:10:35.369 -------------- 00:10:35.369 Delete I/O Submission Queue (00h): Supported 00:10:35.369 Create I/O Submission Queue (01h): Supported 00:10:35.369 Get Log Page (02h): Supported 00:10:35.369 Delete I/O Completion Queue (04h): Supported 00:10:35.369 Create I/O Completion Queue (05h): Supported 00:10:35.369 Identify (06h): Supported 00:10:35.369 Abort (08h): Supported 00:10:35.369 Set Features (09h): Supported 00:10:35.369 Get Features (0Ah): Supported 00:10:35.369 Asynchronous Event Request (0Ch): Supported 00:10:35.369 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:35.369 Directive Send (19h): Supported 00:10:35.369 Directive Receive (1Ah): Supported 00:10:35.369 Virtualization Management (1Ch): Supported 00:10:35.369 Doorbell Buffer Config (7Ch): Supported 00:10:35.369 Format NVM (80h): Supported LBA-Change 00:10:35.369 I/O Commands 00:10:35.369 ------------ 00:10:35.369 Flush (00h): Supported LBA-Change 00:10:35.369 Write (01h): Supported LBA-Change 00:10:35.369 Read (02h): Supported 00:10:35.369 Compare (05h): Supported 00:10:35.369 Write Zeroes (08h): Supported LBA-Change 00:10:35.369 
Dataset Management (09h): Supported LBA-Change 00:10:35.369 Unknown (0Ch): Supported 00:10:35.369 Unknown (12h): Supported 00:10:35.369 Copy (19h): Supported LBA-Change 00:10:35.369 Unknown (1Dh): Supported LBA-Change 00:10:35.369 00:10:35.369 Error Log 00:10:35.369 ========= 00:10:35.369 00:10:35.369 Arbitration 00:10:35.369 =========== 00:10:35.369 Arbitration Burst: no limit 00:10:35.369 00:10:35.369 Power Management 00:10:35.369 ================ 00:10:35.369 Number of Power States: 1 00:10:35.369 Current Power State: Power State #0 00:10:35.369 Power State #0: 00:10:35.369 Max Power: 25.00 W 00:10:35.369 Non-Operational State: Operational 00:10:35.369 Entry Latency: 16 microseconds 00:10:35.369 Exit Latency: 4 microseconds 00:10:35.369 Relative Read Throughput: 0 00:10:35.369 Relative Read Latency: 0 00:10:35.369 Relative Write Throughput: 0 00:10:35.369 Relative Write Latency: 0 00:10:35.369 Idle Power: Not Reported 00:10:35.369 Active Power: Not Reported 00:10:35.369 Non-Operational Permissive Mode: Not Supported 00:10:35.369 00:10:35.369 Health Information 00:10:35.369 ================== 00:10:35.369 Critical Warnings: 00:10:35.370 Available Spare Space: OK 00:10:35.370 Temperature: OK 00:10:35.370 Device Reliability: OK 00:10:35.370 Read Only: No 00:10:35.370 Volatile Memory Backup: OK 00:10:35.370 Current Temperature: 323 Kelvin (50 Celsius) 00:10:35.370 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:35.370 Available Spare: 0% 00:10:35.370 Available Spare Threshold: 0% 00:10:35.370 Life Percentage Used: 0% 00:10:35.370 Data Units Read: 999 00:10:35.370 Data Units Written: 866 00:10:35.370 Host Read Commands: 48666 00:10:35.370 Host Write Commands: 47456 00:10:35.370 Controller Busy Time: 0 minutes 00:10:35.370 Power Cycles: 0 00:10:35.370 Power On Hours: 0 hours 00:10:35.370 Unsafe Shutdowns: 0 00:10:35.370 Unrecoverable Media Errors: 0 00:10:35.370 Lifetime Error Log Entries: 0 00:10:35.370 Warning Temperature Time: 0 minutes 00:10:35.370 Critical Temperature Time: 0 minutes 00:10:35.370 00:10:35.370 Number of Queues 00:10:35.370 ================ 00:10:35.370 Number of I/O Submission Queues: 64 00:10:35.370 Number of I/O Completion Queues: 64 00:10:35.370 00:10:35.370 ZNS Specific Controller Data 00:10:35.370 ============================ 00:10:35.370 Zone Append Size Limit: 0 00:10:35.370 00:10:35.370 00:10:35.370 Active Namespaces 00:10:35.370 ================= 00:10:35.370 Namespace ID:1 00:10:35.370 Error Recovery Timeout: Unlimited 00:10:35.370 Command Set Identifier: NVM (00h) 00:10:35.370 Deallocate: Supported 00:10:35.370 Deallocated/Unwritten Error: Supported 00:10:35.370 Deallocated Read Value: All 0x00 00:10:35.370 Deallocate in Write Zeroes: Not Supported 00:10:35.370 Deallocated Guard Field: 0xFFFF 00:10:35.370 Flush: Supported 00:10:35.370 Reservation: Not Supported 00:10:35.370 Namespace Sharing Capabilities: Private 00:10:35.370 Size (in LBAs): 1310720 (5GiB) 00:10:35.370 Capacity (in LBAs): 1310720 (5GiB) 00:10:35.370 Utilization (in LBAs): 1310720 (5GiB) 00:10:35.370 Thin Provisioning: Not Supported 00:10:35.370 Per-NS Atomic Units: No 00:10:35.370 Maximum Single Source Range Length: 128 00:10:35.370 Maximum Copy Length: 128 00:10:35.370 Maximum Source Range Count: 128 00:10:35.370 NGUID/EUI64 Never Reused: No 00:10:35.370 Namespace Write Protected: No 00:10:35.370 Number of LBA Formats: 8 00:10:35.370 Current LBA Format: LBA Format #04 00:10:35.370 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:35.370 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:10:35.370 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:35.370 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:35.370 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:35.370 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:35.370 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:35.370 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:35.370 00:10:35.370 NVM Specific Namespace Data 00:10:35.370 =========================== 00:10:35.370 Logical Block Storage Tag Mask: 0 00:10:35.370 Protection Information Capabilities: 00:10:35.370 16b Guard Protection Information Storage Tag Support: No 00:10:35.370 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:35.370 Storage Tag Check Read Support: No 00:10:35.370 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.370 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.370 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.370 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.370 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.370 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.370 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.370 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.370 18:12:09 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:35.370 18:12:09 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:10:35.629 ===================================================== 00:10:35.629 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:35.629 ===================================================== 00:10:35.629 Controller Capabilities/Features 00:10:35.629 ================================ 00:10:35.629 Vendor ID: 1b36 00:10:35.629 Subsystem Vendor ID: 1af4 00:10:35.629 Serial Number: 12342 00:10:35.629 Model Number: QEMU NVMe Ctrl 00:10:35.629 Firmware Version: 8.0.0 00:10:35.629 Recommended Arb Burst: 6 00:10:35.629 IEEE OUI Identifier: 00 54 52 00:10:35.629 Multi-path I/O 00:10:35.629 May have multiple subsystem ports: No 00:10:35.629 May have multiple controllers: No 00:10:35.629 Associated with SR-IOV VF: No 00:10:35.629 Max Data Transfer Size: 524288 00:10:35.629 Max Number of Namespaces: 256 00:10:35.629 Max Number of I/O Queues: 64 00:10:35.629 NVMe Specification Version (VS): 1.4 00:10:35.629 NVMe Specification Version (Identify): 1.4 00:10:35.629 Maximum Queue Entries: 2048 00:10:35.629 Contiguous Queues Required: Yes 00:10:35.629 Arbitration Mechanisms Supported 00:10:35.629 Weighted Round Robin: Not Supported 00:10:35.629 Vendor Specific: Not Supported 00:10:35.629 Reset Timeout: 7500 ms 00:10:35.629 Doorbell Stride: 4 bytes 00:10:35.629 NVM Subsystem Reset: Not Supported 00:10:35.629 Command Sets Supported 00:10:35.629 NVM Command Set: Supported 00:10:35.629 Boot Partition: Not Supported 00:10:35.629 Memory Page Size Minimum: 4096 bytes 00:10:35.629 Memory Page Size Maximum: 65536 bytes 00:10:35.629 Persistent Memory Region: Not Supported 00:10:35.629 Optional Asynchronous Events Supported 00:10:35.629 Namespace Attribute Notices: Supported 00:10:35.629 Firmware 
Activation Notices: Not Supported 00:10:35.629 ANA Change Notices: Not Supported 00:10:35.629 PLE Aggregate Log Change Notices: Not Supported 00:10:35.629 LBA Status Info Alert Notices: Not Supported 00:10:35.629 EGE Aggregate Log Change Notices: Not Supported 00:10:35.629 Normal NVM Subsystem Shutdown event: Not Supported 00:10:35.629 Zone Descriptor Change Notices: Not Supported 00:10:35.629 Discovery Log Change Notices: Not Supported 00:10:35.629 Controller Attributes 00:10:35.629 128-bit Host Identifier: Not Supported 00:10:35.629 Non-Operational Permissive Mode: Not Supported 00:10:35.629 NVM Sets: Not Supported 00:10:35.629 Read Recovery Levels: Not Supported 00:10:35.629 Endurance Groups: Not Supported 00:10:35.629 Predictable Latency Mode: Not Supported 00:10:35.629 Traffic Based Keep ALive: Not Supported 00:10:35.629 Namespace Granularity: Not Supported 00:10:35.629 SQ Associations: Not Supported 00:10:35.629 UUID List: Not Supported 00:10:35.629 Multi-Domain Subsystem: Not Supported 00:10:35.629 Fixed Capacity Management: Not Supported 00:10:35.629 Variable Capacity Management: Not Supported 00:10:35.629 Delete Endurance Group: Not Supported 00:10:35.629 Delete NVM Set: Not Supported 00:10:35.629 Extended LBA Formats Supported: Supported 00:10:35.629 Flexible Data Placement Supported: Not Supported 00:10:35.629 00:10:35.629 Controller Memory Buffer Support 00:10:35.629 ================================ 00:10:35.629 Supported: No 00:10:35.629 00:10:35.629 Persistent Memory Region Support 00:10:35.629 ================================ 00:10:35.629 Supported: No 00:10:35.629 00:10:35.629 Admin Command Set Attributes 00:10:35.629 ============================ 00:10:35.629 Security Send/Receive: Not Supported 00:10:35.629 Format NVM: Supported 00:10:35.629 Firmware Activate/Download: Not Supported 00:10:35.629 Namespace Management: Supported 00:10:35.629 Device Self-Test: Not Supported 00:10:35.629 Directives: Supported 00:10:35.629 NVMe-MI: Not Supported 00:10:35.629 Virtualization Management: Not Supported 00:10:35.629 Doorbell Buffer Config: Supported 00:10:35.629 Get LBA Status Capability: Not Supported 00:10:35.629 Command & Feature Lockdown Capability: Not Supported 00:10:35.629 Abort Command Limit: 4 00:10:35.629 Async Event Request Limit: 4 00:10:35.629 Number of Firmware Slots: N/A 00:10:35.629 Firmware Slot 1 Read-Only: N/A 00:10:35.629 Firmware Activation Without Reset: N/A 00:10:35.629 Multiple Update Detection Support: N/A 00:10:35.629 Firmware Update Granularity: No Information Provided 00:10:35.629 Per-Namespace SMART Log: Yes 00:10:35.630 Asymmetric Namespace Access Log Page: Not Supported 00:10:35.630 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:35.630 Command Effects Log Page: Supported 00:10:35.630 Get Log Page Extended Data: Supported 00:10:35.630 Telemetry Log Pages: Not Supported 00:10:35.630 Persistent Event Log Pages: Not Supported 00:10:35.630 Supported Log Pages Log Page: May Support 00:10:35.630 Commands Supported & Effects Log Page: Not Supported 00:10:35.630 Feature Identifiers & Effects Log Page:May Support 00:10:35.630 NVMe-MI Commands & Effects Log Page: May Support 00:10:35.630 Data Area 4 for Telemetry Log: Not Supported 00:10:35.630 Error Log Page Entries Supported: 1 00:10:35.630 Keep Alive: Not Supported 00:10:35.630 00:10:35.630 NVM Command Set Attributes 00:10:35.630 ========================== 00:10:35.630 Submission Queue Entry Size 00:10:35.630 Max: 64 00:10:35.630 Min: 64 00:10:35.630 Completion Queue Entry Size 00:10:35.630 Max: 16 
00:10:35.630 Min: 16 00:10:35.630 Number of Namespaces: 256 00:10:35.630 Compare Command: Supported 00:10:35.630 Write Uncorrectable Command: Not Supported 00:10:35.630 Dataset Management Command: Supported 00:10:35.630 Write Zeroes Command: Supported 00:10:35.630 Set Features Save Field: Supported 00:10:35.630 Reservations: Not Supported 00:10:35.630 Timestamp: Supported 00:10:35.630 Copy: Supported 00:10:35.630 Volatile Write Cache: Present 00:10:35.630 Atomic Write Unit (Normal): 1 00:10:35.630 Atomic Write Unit (PFail): 1 00:10:35.630 Atomic Compare & Write Unit: 1 00:10:35.630 Fused Compare & Write: Not Supported 00:10:35.630 Scatter-Gather List 00:10:35.630 SGL Command Set: Supported 00:10:35.630 SGL Keyed: Not Supported 00:10:35.630 SGL Bit Bucket Descriptor: Not Supported 00:10:35.630 SGL Metadata Pointer: Not Supported 00:10:35.630 Oversized SGL: Not Supported 00:10:35.630 SGL Metadata Address: Not Supported 00:10:35.630 SGL Offset: Not Supported 00:10:35.630 Transport SGL Data Block: Not Supported 00:10:35.630 Replay Protected Memory Block: Not Supported 00:10:35.630 00:10:35.630 Firmware Slot Information 00:10:35.630 ========================= 00:10:35.630 Active slot: 1 00:10:35.630 Slot 1 Firmware Revision: 1.0 00:10:35.630 00:10:35.630 00:10:35.630 Commands Supported and Effects 00:10:35.630 ============================== 00:10:35.630 Admin Commands 00:10:35.630 -------------- 00:10:35.630 Delete I/O Submission Queue (00h): Supported 00:10:35.630 Create I/O Submission Queue (01h): Supported 00:10:35.630 Get Log Page (02h): Supported 00:10:35.630 Delete I/O Completion Queue (04h): Supported 00:10:35.630 Create I/O Completion Queue (05h): Supported 00:10:35.630 Identify (06h): Supported 00:10:35.630 Abort (08h): Supported 00:10:35.630 Set Features (09h): Supported 00:10:35.630 Get Features (0Ah): Supported 00:10:35.630 Asynchronous Event Request (0Ch): Supported 00:10:35.630 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:35.630 Directive Send (19h): Supported 00:10:35.630 Directive Receive (1Ah): Supported 00:10:35.630 Virtualization Management (1Ch): Supported 00:10:35.630 Doorbell Buffer Config (7Ch): Supported 00:10:35.630 Format NVM (80h): Supported LBA-Change 00:10:35.630 I/O Commands 00:10:35.630 ------------ 00:10:35.630 Flush (00h): Supported LBA-Change 00:10:35.630 Write (01h): Supported LBA-Change 00:10:35.630 Read (02h): Supported 00:10:35.630 Compare (05h): Supported 00:10:35.630 Write Zeroes (08h): Supported LBA-Change 00:10:35.630 Dataset Management (09h): Supported LBA-Change 00:10:35.630 Unknown (0Ch): Supported 00:10:35.630 Unknown (12h): Supported 00:10:35.630 Copy (19h): Supported LBA-Change 00:10:35.630 Unknown (1Dh): Supported LBA-Change 00:10:35.630 00:10:35.630 Error Log 00:10:35.630 ========= 00:10:35.630 00:10:35.630 Arbitration 00:10:35.630 =========== 00:10:35.630 Arbitration Burst: no limit 00:10:35.630 00:10:35.630 Power Management 00:10:35.630 ================ 00:10:35.630 Number of Power States: 1 00:10:35.630 Current Power State: Power State #0 00:10:35.630 Power State #0: 00:10:35.630 Max Power: 25.00 W 00:10:35.630 Non-Operational State: Operational 00:10:35.630 Entry Latency: 16 microseconds 00:10:35.630 Exit Latency: 4 microseconds 00:10:35.630 Relative Read Throughput: 0 00:10:35.630 Relative Read Latency: 0 00:10:35.630 Relative Write Throughput: 0 00:10:35.630 Relative Write Latency: 0 00:10:35.630 Idle Power: Not Reported 00:10:35.630 Active Power: Not Reported 00:10:35.630 Non-Operational Permissive Mode: Not Supported 
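The per-controller identify dumps in this log are produced by the loop quoted in the xtrace lines (nvme/nvme.sh@15 and @16). A minimal sketch of that loop, assuming the bdfs array holds the four PCIe addresses probed in this run (the array contents are inferred from the traddr values seen in the traced invocations, not taken from the script itself):

    # Assumed contents, inferred from the traddr values in this log:
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
    for bdf in "${bdfs[@]}"; do
        # Same binary and flags as the invocations traced above; -r names
        # the transport ID (PCIe transport, controller address) to identify.
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
            -r "trtype:PCIe traddr:$bdf" -i 0
    done

Each iteration prints one controller's capabilities followed by its per-namespace data; note that the 0000:00:13.0 dump further below differs from the others in reporting Endurance Groups and Flexible Data Placement as Supported.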
00:10:35.630 00:10:35.630 Health Information 00:10:35.630 ================== 00:10:35.630 Critical Warnings: 00:10:35.630 Available Spare Space: OK 00:10:35.630 Temperature: OK 00:10:35.630 Device Reliability: OK 00:10:35.630 Read Only: No 00:10:35.630 Volatile Memory Backup: OK 00:10:35.630 Current Temperature: 323 Kelvin (50 Celsius) 00:10:35.630 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:35.630 Available Spare: 0% 00:10:35.630 Available Spare Threshold: 0% 00:10:35.630 Life Percentage Used: 0% 00:10:35.630 Data Units Read: 2094 00:10:35.630 Data Units Written: 1882 00:10:35.630 Host Read Commands: 99943 00:10:35.630 Host Write Commands: 98212 00:10:35.630 Controller Busy Time: 0 minutes 00:10:35.630 Power Cycles: 0 00:10:35.630 Power On Hours: 0 hours 00:10:35.630 Unsafe Shutdowns: 0 00:10:35.630 Unrecoverable Media Errors: 0 00:10:35.630 Lifetime Error Log Entries: 0 00:10:35.630 Warning Temperature Time: 0 minutes 00:10:35.630 Critical Temperature Time: 0 minutes 00:10:35.630 00:10:35.630 Number of Queues 00:10:35.630 ================ 00:10:35.630 Number of I/O Submission Queues: 64 00:10:35.630 Number of I/O Completion Queues: 64 00:10:35.630 00:10:35.630 ZNS Specific Controller Data 00:10:35.630 ============================ 00:10:35.630 Zone Append Size Limit: 0 00:10:35.630 00:10:35.630 00:10:35.630 Active Namespaces 00:10:35.630 ================= 00:10:35.630 Namespace ID:1 00:10:35.630 Error Recovery Timeout: Unlimited 00:10:35.630 Command Set Identifier: NVM (00h) 00:10:35.630 Deallocate: Supported 00:10:35.630 Deallocated/Unwritten Error: Supported 00:10:35.630 Deallocated Read Value: All 0x00 00:10:35.630 Deallocate in Write Zeroes: Not Supported 00:10:35.630 Deallocated Guard Field: 0xFFFF 00:10:35.630 Flush: Supported 00:10:35.630 Reservation: Not Supported 00:10:35.630 Namespace Sharing Capabilities: Private 00:10:35.630 Size (in LBAs): 1048576 (4GiB) 00:10:35.630 Capacity (in LBAs): 1048576 (4GiB) 00:10:35.630 Utilization (in LBAs): 1048576 (4GiB) 00:10:35.630 Thin Provisioning: Not Supported 00:10:35.630 Per-NS Atomic Units: No 00:10:35.630 Maximum Single Source Range Length: 128 00:10:35.630 Maximum Copy Length: 128 00:10:35.630 Maximum Source Range Count: 128 00:10:35.630 NGUID/EUI64 Never Reused: No 00:10:35.630 Namespace Write Protected: No 00:10:35.630 Number of LBA Formats: 8 00:10:35.630 Current LBA Format: LBA Format #04 00:10:35.630 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:35.630 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:35.630 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:35.630 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:35.630 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:35.630 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:35.630 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:35.630 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:35.630 00:10:35.630 NVM Specific Namespace Data 00:10:35.630 =========================== 00:10:35.630 Logical Block Storage Tag Mask: 0 00:10:35.630 Protection Information Capabilities: 00:10:35.630 16b Guard Protection Information Storage Tag Support: No 00:10:35.630 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:35.630 Storage Tag Check Read Support: No 00:10:35.630 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.630 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.630 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.630 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.630 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.630 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.630 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.630 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.630 Namespace ID:2 00:10:35.630 Error Recovery Timeout: Unlimited 00:10:35.630 Command Set Identifier: NVM (00h) 00:10:35.630 Deallocate: Supported 00:10:35.630 Deallocated/Unwritten Error: Supported 00:10:35.630 Deallocated Read Value: All 0x00 00:10:35.630 Deallocate in Write Zeroes: Not Supported 00:10:35.630 Deallocated Guard Field: 0xFFFF 00:10:35.630 Flush: Supported 00:10:35.631 Reservation: Not Supported 00:10:35.631 Namespace Sharing Capabilities: Private 00:10:35.631 Size (in LBAs): 1048576 (4GiB) 00:10:35.631 Capacity (in LBAs): 1048576 (4GiB) 00:10:35.631 Utilization (in LBAs): 1048576 (4GiB) 00:10:35.631 Thin Provisioning: Not Supported 00:10:35.631 Per-NS Atomic Units: No 00:10:35.631 Maximum Single Source Range Length: 128 00:10:35.631 Maximum Copy Length: 128 00:10:35.631 Maximum Source Range Count: 128 00:10:35.631 NGUID/EUI64 Never Reused: No 00:10:35.631 Namespace Write Protected: No 00:10:35.631 Number of LBA Formats: 8 00:10:35.631 Current LBA Format: LBA Format #04 00:10:35.631 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:35.631 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:35.631 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:35.631 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:35.631 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:35.631 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:35.631 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:35.631 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:35.631 00:10:35.631 NVM Specific Namespace Data 00:10:35.631 =========================== 00:10:35.631 Logical Block Storage Tag Mask: 0 00:10:35.631 Protection Information Capabilities: 00:10:35.631 16b Guard Protection Information Storage Tag Support: No 00:10:35.631 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:35.631 Storage Tag Check Read Support: No 00:10:35.631 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.631 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.631 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.631 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.631 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.631 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.631 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.631 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.631 Namespace ID:3 00:10:35.631 Error Recovery Timeout: Unlimited 00:10:35.631 Command Set Identifier: NVM (00h) 00:10:35.631 Deallocate: Supported 00:10:35.631 Deallocated/Unwritten Error: Supported 00:10:35.631 Deallocated Read 
Value: All 0x00 00:10:35.631 Deallocate in Write Zeroes: Not Supported 00:10:35.631 Deallocated Guard Field: 0xFFFF 00:10:35.631 Flush: Supported 00:10:35.631 Reservation: Not Supported 00:10:35.631 Namespace Sharing Capabilities: Private 00:10:35.631 Size (in LBAs): 1048576 (4GiB) 00:10:35.631 Capacity (in LBAs): 1048576 (4GiB) 00:10:35.631 Utilization (in LBAs): 1048576 (4GiB) 00:10:35.631 Thin Provisioning: Not Supported 00:10:35.631 Per-NS Atomic Units: No 00:10:35.631 Maximum Single Source Range Length: 128 00:10:35.631 Maximum Copy Length: 128 00:10:35.631 Maximum Source Range Count: 128 00:10:35.631 NGUID/EUI64 Never Reused: No 00:10:35.631 Namespace Write Protected: No 00:10:35.631 Number of LBA Formats: 8 00:10:35.631 Current LBA Format: LBA Format #04 00:10:35.631 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:35.631 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:35.631 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:35.631 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:35.631 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:35.631 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:35.631 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:35.631 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:35.631 00:10:35.631 NVM Specific Namespace Data 00:10:35.631 =========================== 00:10:35.631 Logical Block Storage Tag Mask: 0 00:10:35.631 Protection Information Capabilities: 00:10:35.631 16b Guard Protection Information Storage Tag Support: No 00:10:35.631 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:35.631 Storage Tag Check Read Support: No 00:10:35.631 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.631 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.631 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.631 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.631 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.631 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.631 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.631 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:35.631 18:12:10 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:35.631 18:12:10 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:10:35.889 ===================================================== 00:10:35.889 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:35.889 ===================================================== 00:10:35.889 Controller Capabilities/Features 00:10:35.889 ================================ 00:10:35.889 Vendor ID: 1b36 00:10:35.889 Subsystem Vendor ID: 1af4 00:10:35.889 Serial Number: 12343 00:10:35.889 Model Number: QEMU NVMe Ctrl 00:10:35.890 Firmware Version: 8.0.0 00:10:35.890 Recommended Arb Burst: 6 00:10:35.890 IEEE OUI Identifier: 00 54 52 00:10:35.890 Multi-path I/O 00:10:35.890 May have multiple subsystem ports: No 00:10:35.890 May have multiple controllers: Yes 00:10:35.890 Associated with SR-IOV VF: No 00:10:35.890 Max Data Transfer Size: 524288 00:10:35.890 Max Number of Namespaces: 
256 00:10:35.890 Max Number of I/O Queues: 64 00:10:35.890 NVMe Specification Version (VS): 1.4 00:10:35.890 NVMe Specification Version (Identify): 1.4 00:10:35.890 Maximum Queue Entries: 2048 00:10:35.890 Contiguous Queues Required: Yes 00:10:35.890 Arbitration Mechanisms Supported 00:10:35.890 Weighted Round Robin: Not Supported 00:10:35.890 Vendor Specific: Not Supported 00:10:35.890 Reset Timeout: 7500 ms 00:10:35.890 Doorbell Stride: 4 bytes 00:10:35.890 NVM Subsystem Reset: Not Supported 00:10:35.890 Command Sets Supported 00:10:35.890 NVM Command Set: Supported 00:10:35.890 Boot Partition: Not Supported 00:10:35.890 Memory Page Size Minimum: 4096 bytes 00:10:35.890 Memory Page Size Maximum: 65536 bytes 00:10:35.890 Persistent Memory Region: Not Supported 00:10:35.890 Optional Asynchronous Events Supported 00:10:35.890 Namespace Attribute Notices: Supported 00:10:35.890 Firmware Activation Notices: Not Supported 00:10:35.890 ANA Change Notices: Not Supported 00:10:35.890 PLE Aggregate Log Change Notices: Not Supported 00:10:35.890 LBA Status Info Alert Notices: Not Supported 00:10:35.890 EGE Aggregate Log Change Notices: Not Supported 00:10:35.890 Normal NVM Subsystem Shutdown event: Not Supported 00:10:35.890 Zone Descriptor Change Notices: Not Supported 00:10:35.890 Discovery Log Change Notices: Not Supported 00:10:35.890 Controller Attributes 00:10:35.890 128-bit Host Identifier: Not Supported 00:10:35.890 Non-Operational Permissive Mode: Not Supported 00:10:35.890 NVM Sets: Not Supported 00:10:35.890 Read Recovery Levels: Not Supported 00:10:35.890 Endurance Groups: Supported 00:10:35.890 Predictable Latency Mode: Not Supported 00:10:35.890 Traffic Based Keep Alive: Not Supported 00:10:35.890 Namespace Granularity: Not Supported 00:10:35.890 SQ Associations: Not Supported 00:10:35.890 UUID List: Not Supported 00:10:35.890 Multi-Domain Subsystem: Not Supported 00:10:35.890 Fixed Capacity Management: Not Supported 00:10:35.890 Variable Capacity Management: Not Supported 00:10:35.890 Delete Endurance Group: Not Supported 00:10:35.890 Delete NVM Set: Not Supported 00:10:35.890 Extended LBA Formats Supported: Supported 00:10:35.890 Flexible Data Placement Supported: Supported 00:10:35.890 00:10:35.890 Controller Memory Buffer Support 00:10:35.890 ================================ 00:10:35.890 Supported: No 00:10:35.890 00:10:35.890 Persistent Memory Region Support 00:10:35.890 ================================ 00:10:35.890 Supported: No 00:10:35.890 00:10:35.890 Admin Command Set Attributes 00:10:35.890 ============================ 00:10:35.890 Security Send/Receive: Not Supported 00:10:35.890 Format NVM: Supported 00:10:35.890 Firmware Activate/Download: Not Supported 00:10:35.890 Namespace Management: Supported 00:10:35.890 Device Self-Test: Not Supported 00:10:35.890 Directives: Supported 00:10:35.890 NVMe-MI: Not Supported 00:10:35.890 Virtualization Management: Not Supported 00:10:35.890 Doorbell Buffer Config: Supported 00:10:35.890 Get LBA Status Capability: Not Supported 00:10:35.890 Command & Feature Lockdown Capability: Not Supported 00:10:35.890 Abort Command Limit: 4 00:10:35.890 Async Event Request Limit: 4 00:10:35.890 Number of Firmware Slots: N/A 00:10:35.890 Firmware Slot 1 Read-Only: N/A 00:10:35.890 Firmware Activation Without Reset: N/A 00:10:35.890 Multiple Update Detection Support: N/A 00:10:35.890 Firmware Update Granularity: No Information Provided 00:10:35.890 Per-Namespace SMART Log: Yes 00:10:35.890 Asymmetric Namespace Access Log Page: Not Supported
00:10:35.890 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:35.890 Command Effects Log Page: Supported 00:10:35.890 Get Log Page Extended Data: Supported 00:10:35.890 Telemetry Log Pages: Not Supported 00:10:35.890 Persistent Event Log Pages: Not Supported 00:10:35.890 Supported Log Pages Log Page: May Support 00:10:35.890 Commands Supported & Effects Log Page: Not Supported 00:10:35.890 Feature Identifiers & Effects Log Page: May Support 00:10:35.890 NVMe-MI Commands & Effects Log Page: May Support 00:10:35.890 Data Area 4 for Telemetry Log: Not Supported 00:10:35.890 Error Log Page Entries Supported: 1 00:10:35.890 Keep Alive: Not Supported 00:10:35.890 00:10:35.890 NVM Command Set Attributes 00:10:35.890 ========================== 00:10:35.890 Submission Queue Entry Size 00:10:35.890 Max: 64 00:10:35.890 Min: 64 00:10:35.890 Completion Queue Entry Size 00:10:35.890 Max: 16 00:10:35.890 Min: 16 00:10:35.890 Number of Namespaces: 256 00:10:35.890 Compare Command: Supported 00:10:35.890 Write Uncorrectable Command: Not Supported 00:10:35.890 Dataset Management Command: Supported 00:10:35.890 Write Zeroes Command: Supported 00:10:35.890 Set Features Save Field: Supported 00:10:35.890 Reservations: Not Supported 00:10:35.890 Timestamp: Supported 00:10:35.890 Copy: Supported 00:10:35.890 Volatile Write Cache: Present 00:10:35.890 Atomic Write Unit (Normal): 1 00:10:35.890 Atomic Write Unit (PFail): 1 00:10:35.890 Atomic Compare & Write Unit: 1 00:10:35.890 Fused Compare & Write: Not Supported 00:10:35.890 Scatter-Gather List 00:10:35.890 SGL Command Set: Supported 00:10:35.890 SGL Keyed: Not Supported 00:10:35.890 SGL Bit Bucket Descriptor: Not Supported 00:10:35.890 SGL Metadata Pointer: Not Supported 00:10:35.890 Oversized SGL: Not Supported 00:10:35.890 SGL Metadata Address: Not Supported 00:10:35.890 SGL Offset: Not Supported 00:10:35.890 Transport SGL Data Block: Not Supported 00:10:35.890 Replay Protected Memory Block: Not Supported 00:10:35.890 00:10:35.890 Firmware Slot Information 00:10:35.890 ========================= 00:10:35.890 Active slot: 1 00:10:35.890 Slot 1 Firmware Revision: 1.0 00:10:35.890 00:10:35.890 00:10:35.890 Commands Supported and Effects 00:10:35.890 ============================== 00:10:35.890 Admin Commands 00:10:35.890 -------------- 00:10:35.890 Delete I/O Submission Queue (00h): Supported 00:10:35.890 Create I/O Submission Queue (01h): Supported 00:10:35.890 Get Log Page (02h): Supported 00:10:35.890 Delete I/O Completion Queue (04h): Supported 00:10:35.890 Create I/O Completion Queue (05h): Supported 00:10:35.890 Identify (06h): Supported 00:10:35.890 Abort (08h): Supported 00:10:35.890 Set Features (09h): Supported 00:10:35.890 Get Features (0Ah): Supported 00:10:35.890 Asynchronous Event Request (0Ch): Supported 00:10:35.890 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:35.890 Directive Send (19h): Supported 00:10:35.890 Directive Receive (1Ah): Supported 00:10:35.890 Virtualization Management (1Ch): Supported 00:10:35.890 Doorbell Buffer Config (7Ch): Supported 00:10:35.890 Format NVM (80h): Supported LBA-Change 00:10:35.890 I/O Commands 00:10:35.890 ------------ 00:10:35.890 Flush (00h): Supported LBA-Change 00:10:35.890 Write (01h): Supported LBA-Change 00:10:35.890 Read (02h): Supported 00:10:35.890 Compare (05h): Supported 00:10:35.890 Write Zeroes (08h): Supported LBA-Change 00:10:35.890 Dataset Management (09h): Supported LBA-Change 00:10:35.890 Unknown (0Ch): Supported 00:10:35.890 Unknown (12h): Supported 00:10:35.890 Copy
(19h): Supported LBA-Change 00:10:35.890 Unknown (1Dh): Supported LBA-Change 00:10:35.890 00:10:35.890 Error Log 00:10:35.890 ========= 00:10:35.890 00:10:35.890 Arbitration 00:10:35.890 =========== 00:10:35.890 Arbitration Burst: no limit 00:10:35.890 00:10:35.890 Power Management 00:10:35.890 ================ 00:10:35.890 Number of Power States: 1 00:10:35.890 Current Power State: Power State #0 00:10:35.890 Power State #0: 00:10:35.890 Max Power: 25.00 W 00:10:35.890 Non-Operational State: Operational 00:10:35.890 Entry Latency: 16 microseconds 00:10:35.890 Exit Latency: 4 microseconds 00:10:35.890 Relative Read Throughput: 0 00:10:35.890 Relative Read Latency: 0 00:10:35.890 Relative Write Throughput: 0 00:10:35.890 Relative Write Latency: 0 00:10:35.890 Idle Power: Not Reported 00:10:35.890 Active Power: Not Reported 00:10:35.890 Non-Operational Permissive Mode: Not Supported 00:10:35.890 00:10:35.890 Health Information 00:10:35.890 ================== 00:10:35.890 Critical Warnings: 00:10:35.890 Available Spare Space: OK 00:10:35.891 Temperature: OK 00:10:35.891 Device Reliability: OK 00:10:35.891 Read Only: No 00:10:35.891 Volatile Memory Backup: OK 00:10:35.891 Current Temperature: 323 Kelvin (50 Celsius) 00:10:35.891 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:35.891 Available Spare: 0% 00:10:35.891 Available Spare Threshold: 0% 00:10:35.891 Life Percentage Used: 0% 00:10:35.891 Data Units Read: 771 00:10:35.891 Data Units Written: 700 00:10:35.891 Host Read Commands: 33880 00:10:35.891 Host Write Commands: 33303 00:10:35.891 Controller Busy Time: 0 minutes 00:10:35.891 Power Cycles: 0 00:10:35.891 Power On Hours: 0 hours 00:10:35.891 Unsafe Shutdowns: 0 00:10:35.891 Unrecoverable Media Errors: 0 00:10:35.891 Lifetime Error Log Entries: 0 00:10:35.891 Warning Temperature Time: 0 minutes 00:10:35.891 Critical Temperature Time: 0 minutes 00:10:35.891 00:10:35.891 Number of Queues 00:10:35.891 ================ 00:10:35.891 Number of I/O Submission Queues: 64 00:10:35.891 Number of I/O Completion Queues: 64 00:10:35.891 00:10:35.891 ZNS Specific Controller Data 00:10:35.891 ============================ 00:10:35.891 Zone Append Size Limit: 0 00:10:35.891 00:10:35.891 00:10:35.891 Active Namespaces 00:10:35.891 ================= 00:10:35.891 Namespace ID:1 00:10:35.891 Error Recovery Timeout: Unlimited 00:10:35.891 Command Set Identifier: NVM (00h) 00:10:35.891 Deallocate: Supported 00:10:35.891 Deallocated/Unwritten Error: Supported 00:10:35.891 Deallocated Read Value: All 0x00 00:10:35.891 Deallocate in Write Zeroes: Not Supported 00:10:35.891 Deallocated Guard Field: 0xFFFF 00:10:35.891 Flush: Supported 00:10:35.891 Reservation: Not Supported 00:10:35.891 Namespace Sharing Capabilities: Multiple Controllers 00:10:35.891 Size (in LBAs): 262144 (1GiB) 00:10:35.891 Capacity (in LBAs): 262144 (1GiB) 00:10:35.891 Utilization (in LBAs): 262144 (1GiB) 00:10:35.891 Thin Provisioning: Not Supported 00:10:35.891 Per-NS Atomic Units: No 00:10:35.891 Maximum Single Source Range Length: 128 00:10:35.891 Maximum Copy Length: 128 00:10:35.891 Maximum Source Range Count: 128 00:10:35.891 NGUID/EUI64 Never Reused: No 00:10:35.891 Namespace Write Protected: No 00:10:35.891 Endurance group ID: 1 00:10:35.891 Number of LBA Formats: 8 00:10:35.891 Current LBA Format: LBA Format #04 00:10:35.891 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:35.891 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:35.891 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:35.891 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:10:35.891 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:35.891 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:35.891 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:35.891 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:35.891 00:10:35.891 Get Feature FDP: 00:10:35.891 ================ 00:10:35.891 Enabled: Yes 00:10:35.891 FDP configuration index: 0 00:10:35.891 00:10:35.891 FDP configurations log page 00:10:35.891 =========================== 00:10:35.891 Number of FDP configurations: 1 00:10:35.891 Version: 0 00:10:35.891 Size: 112 00:10:35.891 FDP Configuration Descriptor: 0 00:10:35.891 Descriptor Size: 96 00:10:35.891 Reclaim Group Identifier format: 2 00:10:35.891 FDP Volatile Write Cache: Not Present 00:10:35.891 FDP Configuration: Valid 00:10:35.891 Vendor Specific Size: 0 00:10:35.891 Number of Reclaim Groups: 2 00:10:35.891 Number of Reclaim Unit Handles: 8 00:10:35.891 Max Placement Identifiers: 128 00:10:35.891 Number of Namespaces Supported: 256 00:10:35.891 Reclaim Unit Nominal Size: 6000000 bytes 00:10:35.891 Estimated Reclaim Unit Time Limit: Not Reported 00:10:35.891 RUH Desc #000: RUH Type: Initially Isolated 00:10:35.891 RUH Desc #001: RUH Type: Initially Isolated 00:10:35.891 RUH Desc #002: RUH Type: Initially Isolated 00:10:35.891 RUH Desc #003: RUH Type: Initially Isolated 00:10:35.891 RUH Desc #004: RUH Type: Initially Isolated 00:10:35.891 RUH Desc #005: RUH Type: Initially Isolated 00:10:35.891 RUH Desc #006: RUH Type: Initially Isolated 00:10:35.891 RUH Desc #007: RUH Type: Initially Isolated 00:10:35.891 00:10:35.891 FDP reclaim unit handle usage log page 00:10:36.149 ====================================== 00:10:36.149 Number of Reclaim Unit Handles: 8 00:10:36.149 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:36.149 RUH Usage Desc #001: RUH Attributes: Unused 00:10:36.149 RUH Usage Desc #002: RUH Attributes: Unused 00:10:36.149 RUH Usage Desc #003: RUH Attributes: Unused 00:10:36.149 RUH Usage Desc #004: RUH Attributes: Unused 00:10:36.150 RUH Usage Desc #005: RUH Attributes: Unused 00:10:36.150 RUH Usage Desc #006: RUH Attributes: Unused 00:10:36.150 RUH Usage Desc #007: RUH Attributes: Unused 00:10:36.150 00:10:36.150 FDP statistics log page 00:10:36.150 ======================= 00:10:36.150 Host bytes with metadata written: 442277888 00:10:36.150 Media bytes with metadata written: 442343424 00:10:36.150 Media bytes erased: 0 00:10:36.150 00:10:36.150 FDP events log page 00:10:36.150 =================== 00:10:36.150 Number of FDP events: 0 00:10:36.150 00:10:36.150 NVM Specific Namespace Data 00:10:36.150 =========================== 00:10:36.150 Logical Block Storage Tag Mask: 0 00:10:36.150 Protection Information Capabilities: 00:10:36.150 16b Guard Protection Information Storage Tag Support: No 00:10:36.150 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:10:36.150 Storage Tag Check Read Support: No 00:10:36.150 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.150 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.150 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.150 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.150 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.150 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.150 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.150 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:10:36.150 ************************************ 00:10:36.150 END TEST nvme_identify 00:10:36.150 ************************************ 00:10:36.150 00:10:36.150 real 0m1.901s 00:10:36.150 user 0m0.731s 00:10:36.150 sys 0m0.936s 00:10:36.150 18:12:10 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.150 18:12:10 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:10:36.150 18:12:10 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:10:36.150 18:12:10 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:36.150 18:12:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.150 18:12:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:36.150 ************************************ 00:10:36.150 START TEST nvme_perf 00:10:36.150 ************************************ 00:10:36.150 18:12:10 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:10:36.150 18:12:10 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:10:37.526 Initializing NVMe Controllers 00:10:37.526 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:37.526 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:37.526 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:37.526 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:37.526 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:37.526 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:37.526 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:37.526 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:37.526 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:37.526 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:37.526 Initialization complete. Launching workers. 
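For reference, the identify and perf steps traced above can be reproduced outside the run_test wrapper by invoking the same binaries directly. A minimal sketch, assuming this run's repo path and BDF (both environment-specific); only the flags with standard spdk_nvme_perf meanings (-q, -w, -o, -t, -i) are annotated, and the remaining flags are carried over from the log unchanged:
# Bind the NVMe devices to a userspace driver before running either tool:
sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh
# Identify one controller, addressed by its PCIe BDF:
sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0
# Sequential-read run matching the invocation above:
#   -q 128: queue depth, -w read: workload (-w write is used by the second run below),
#   -o 12288: I/O size in bytes (12 KiB), -t 1: one-second run,
#   -i 0: shared-memory group ID (same as identify),
#   -LL and -N: latency-tracking/notification flags kept verbatim from the log
sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N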
00:10:37.526 ======================================================== 00:10:37.526 Latency(us) 00:10:37.526 Device Information : IOPS MiB/s Average min max 00:10:37.526 PCIE (0000:00:10.0) NSID 1 from core 0: 12063.31 141.37 10628.15 8260.00 40027.08 00:10:37.526 PCIE (0000:00:11.0) NSID 1 from core 0: 12063.31 141.37 10607.81 8406.34 37741.14 00:10:37.526 PCIE (0000:00:13.0) NSID 1 from core 0: 12063.31 141.37 10585.17 8358.55 36102.25 00:10:37.526 PCIE (0000:00:12.0) NSID 1 from core 0: 12063.31 141.37 10561.68 8358.45 33783.51 00:10:37.526 PCIE (0000:00:12.0) NSID 2 from core 0: 12063.31 141.37 10538.72 8361.05 31438.46 00:10:37.526 PCIE (0000:00:12.0) NSID 3 from core 0: 12063.31 141.37 10515.38 8381.84 29096.92 00:10:37.526 ======================================================== 00:10:37.526 Total : 72379.85 848.20 10572.82 8260.00 40027.08 00:10:37.526 00:10:37.526 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:37.526 ================================================================================= 00:10:37.526 1.00000% : 8579.258us 00:10:37.526 10.00000% : 9234.618us 00:10:37.526 25.00000% : 9830.400us 00:10:37.526 50.00000% : 10426.182us 00:10:37.526 75.00000% : 10962.385us 00:10:37.526 90.00000% : 11439.011us 00:10:37.526 95.00000% : 12034.793us 00:10:37.526 98.00000% : 13166.778us 00:10:37.526 99.00000% : 29908.247us 00:10:37.526 99.50000% : 37891.724us 00:10:37.526 99.90000% : 39798.225us 00:10:37.526 99.99000% : 40036.538us 00:10:37.526 99.99900% : 40036.538us 00:10:37.526 99.99990% : 40036.538us 00:10:37.526 99.99999% : 40036.538us 00:10:37.526 00:10:37.526 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:37.526 ================================================================================= 00:10:37.526 1.00000% : 8698.415us 00:10:37.526 10.00000% : 9234.618us 00:10:37.526 25.00000% : 9830.400us 00:10:37.526 50.00000% : 10426.182us 00:10:37.526 75.00000% : 10902.807us 00:10:37.526 90.00000% : 11379.433us 00:10:37.526 95.00000% : 11975.215us 00:10:37.526 98.00000% : 13285.935us 00:10:37.526 99.00000% : 28835.840us 00:10:37.526 99.50000% : 35746.909us 00:10:37.526 99.90000% : 37415.098us 00:10:37.526 99.99000% : 37891.724us 00:10:37.526 99.99900% : 37891.724us 00:10:37.526 99.99990% : 37891.724us 00:10:37.526 99.99999% : 37891.724us 00:10:37.526 00:10:37.526 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:37.526 ================================================================================= 00:10:37.526 1.00000% : 8698.415us 00:10:37.526 10.00000% : 9234.618us 00:10:37.526 25.00000% : 9830.400us 00:10:37.526 50.00000% : 10426.182us 00:10:37.526 75.00000% : 10902.807us 00:10:37.526 90.00000% : 11379.433us 00:10:37.526 95.00000% : 11856.058us 00:10:37.526 98.00000% : 13762.560us 00:10:37.526 99.00000% : 27167.651us 00:10:37.526 99.50000% : 34078.720us 00:10:37.526 99.90000% : 35746.909us 00:10:37.526 99.99000% : 36223.535us 00:10:37.526 99.99900% : 36223.535us 00:10:37.526 99.99990% : 36223.535us 00:10:37.526 99.99999% : 36223.535us 00:10:37.526 00:10:37.526 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:37.526 ================================================================================= 00:10:37.526 1.00000% : 8698.415us 00:10:37.526 10.00000% : 9294.196us 00:10:37.526 25.00000% : 9830.400us 00:10:37.526 50.00000% : 10426.182us 00:10:37.526 75.00000% : 10902.807us 00:10:37.526 90.00000% : 11379.433us 00:10:37.526 95.00000% : 11856.058us 00:10:37.526 98.00000% : 12690.153us 
00:10:37.526 99.00000% : 24784.524us 00:10:37.526 99.50000% : 31695.593us 00:10:37.526 99.90000% : 33363.782us 00:10:37.526 99.99000% : 33840.407us 00:10:37.526 99.99900% : 33840.407us 00:10:37.526 99.99990% : 33840.407us 00:10:37.526 99.99999% : 33840.407us 00:10:37.526 00:10:37.526 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:37.526 ================================================================================= 00:10:37.526 1.00000% : 8698.415us 00:10:37.526 10.00000% : 9234.618us 00:10:37.526 25.00000% : 9830.400us 00:10:37.526 50.00000% : 10426.182us 00:10:37.526 75.00000% : 10902.807us 00:10:37.526 90.00000% : 11439.011us 00:10:37.526 95.00000% : 11915.636us 00:10:37.526 98.00000% : 12809.309us 00:10:37.526 99.00000% : 22520.553us 00:10:37.526 99.50000% : 29312.465us 00:10:37.526 99.90000% : 31218.967us 00:10:37.526 99.99000% : 31457.280us 00:10:37.526 99.99900% : 31457.280us 00:10:37.526 99.99990% : 31457.280us 00:10:37.526 99.99999% : 31457.280us 00:10:37.526 00:10:37.526 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:37.526 ================================================================================= 00:10:37.526 1.00000% : 8698.415us 00:10:37.526 10.00000% : 9294.196us 00:10:37.526 25.00000% : 9830.400us 00:10:37.526 50.00000% : 10426.182us 00:10:37.526 75.00000% : 10902.807us 00:10:37.526 90.00000% : 11379.433us 00:10:37.526 95.00000% : 11975.215us 00:10:37.526 98.00000% : 13107.200us 00:10:37.526 99.00000% : 20256.582us 00:10:37.526 99.50000% : 27048.495us 00:10:37.526 99.90000% : 28716.684us 00:10:37.526 99.99000% : 29074.153us 00:10:37.526 99.99900% : 29193.309us 00:10:37.526 99.99990% : 29193.309us 00:10:37.526 99.99999% : 29193.309us 00:10:37.526 00:10:37.526 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:37.526 ============================================================================== 00:10:37.526 Range in us Cumulative IO count 00:10:37.526 8221.789 - 8281.367: 0.0331% ( 4) 00:10:37.526 8281.367 - 8340.945: 0.1488% ( 14) 00:10:37.526 8340.945 - 8400.524: 0.3803% ( 28) 00:10:37.526 8400.524 - 8460.102: 0.5622% ( 22) 00:10:37.526 8460.102 - 8519.680: 0.8598% ( 36) 00:10:37.526 8519.680 - 8579.258: 1.2070% ( 42) 00:10:37.526 8579.258 - 8638.836: 1.6452% ( 53) 00:10:37.526 8638.836 - 8698.415: 2.4306% ( 95) 00:10:37.526 8698.415 - 8757.993: 3.1663% ( 89) 00:10:37.526 8757.993 - 8817.571: 3.9683% ( 97) 00:10:37.526 8817.571 - 8877.149: 4.8115% ( 102) 00:10:37.526 8877.149 - 8936.727: 5.7044% ( 108) 00:10:37.526 8936.727 - 8996.305: 6.6468% ( 114) 00:10:37.526 8996.305 - 9055.884: 7.5066% ( 104) 00:10:37.526 9055.884 - 9115.462: 8.4491% ( 114) 00:10:37.526 9115.462 - 9175.040: 9.3585% ( 110) 00:10:37.526 9175.040 - 9234.618: 10.3423% ( 119) 00:10:37.526 9234.618 - 9294.196: 11.4335% ( 132) 00:10:37.526 9294.196 - 9353.775: 12.5331% ( 133) 00:10:37.526 9353.775 - 9413.353: 13.6326% ( 133) 00:10:37.526 9413.353 - 9472.931: 15.0711% ( 174) 00:10:37.526 9472.931 - 9532.509: 16.2864% ( 147) 00:10:37.526 9532.509 - 9592.087: 17.8819% ( 193) 00:10:37.526 9592.087 - 9651.665: 19.6429% ( 213) 00:10:37.526 9651.665 - 9711.244: 21.5774% ( 234) 00:10:37.527 9711.244 - 9770.822: 23.5367% ( 237) 00:10:37.527 9770.822 - 9830.400: 25.9094% ( 287) 00:10:37.527 9830.400 - 9889.978: 28.2159% ( 279) 00:10:37.527 9889.978 - 9949.556: 30.7870% ( 311) 00:10:37.527 9949.556 - 10009.135: 33.2920% ( 303) 00:10:37.527 10009.135 - 10068.713: 35.9292% ( 319) 00:10:37.527 10068.713 - 10128.291: 38.5499% ( 317) 00:10:37.527 
10128.291 - 10187.869: 41.0632% ( 304) 00:10:37.527 10187.869 - 10247.447: 43.8575% ( 338) 00:10:37.527 10247.447 - 10307.025: 46.5856% ( 330) 00:10:37.527 10307.025 - 10366.604: 49.2560% ( 323) 00:10:37.527 10366.604 - 10426.182: 52.1577% ( 351) 00:10:37.527 10426.182 - 10485.760: 55.0843% ( 354) 00:10:37.527 10485.760 - 10545.338: 57.9861% ( 351) 00:10:37.527 10545.338 - 10604.916: 60.8548% ( 347) 00:10:37.527 10604.916 - 10664.495: 63.7814% ( 354) 00:10:37.527 10664.495 - 10724.073: 66.5261% ( 332) 00:10:37.527 10724.073 - 10783.651: 69.4610% ( 355) 00:10:37.527 10783.651 - 10843.229: 72.0238% ( 310) 00:10:37.527 10843.229 - 10902.807: 74.6445% ( 317) 00:10:37.527 10902.807 - 10962.385: 76.8601% ( 268) 00:10:37.527 10962.385 - 11021.964: 79.1832% ( 281) 00:10:37.527 11021.964 - 11081.542: 81.3327% ( 260) 00:10:37.527 11081.542 - 11141.120: 83.2755% ( 235) 00:10:37.527 11141.120 - 11200.698: 85.0446% ( 214) 00:10:37.527 11200.698 - 11260.276: 86.6815% ( 198) 00:10:37.527 11260.276 - 11319.855: 88.1035% ( 172) 00:10:37.527 11319.855 - 11379.433: 89.2609% ( 140) 00:10:37.527 11379.433 - 11439.011: 90.3026% ( 126) 00:10:37.527 11439.011 - 11498.589: 91.0466% ( 90) 00:10:37.527 11498.589 - 11558.167: 91.8072% ( 92) 00:10:37.527 11558.167 - 11617.745: 92.4355% ( 76) 00:10:37.527 11617.745 - 11677.324: 93.0721% ( 77) 00:10:37.527 11677.324 - 11736.902: 93.5681% ( 60) 00:10:37.527 11736.902 - 11796.480: 93.9567% ( 47) 00:10:37.527 11796.480 - 11856.058: 94.2708% ( 38) 00:10:37.527 11856.058 - 11915.636: 94.6263% ( 43) 00:10:37.527 11915.636 - 11975.215: 94.8661% ( 29) 00:10:37.527 11975.215 - 12034.793: 95.0645% ( 24) 00:10:37.527 12034.793 - 12094.371: 95.2712% ( 25) 00:10:37.527 12094.371 - 12153.949: 95.5522% ( 34) 00:10:37.527 12153.949 - 12213.527: 95.7672% ( 26) 00:10:37.527 12213.527 - 12273.105: 95.9573% ( 23) 00:10:37.527 12273.105 - 12332.684: 96.1475% ( 23) 00:10:37.527 12332.684 - 12392.262: 96.3046% ( 19) 00:10:37.527 12392.262 - 12451.840: 96.5278% ( 27) 00:10:37.527 12451.840 - 12511.418: 96.7014% ( 21) 00:10:37.527 12511.418 - 12570.996: 96.9081% ( 25) 00:10:37.527 12570.996 - 12630.575: 97.0569% ( 18) 00:10:37.527 12630.575 - 12690.153: 97.2057% ( 18) 00:10:37.527 12690.153 - 12749.731: 97.2966% ( 11) 00:10:37.527 12749.731 - 12809.309: 97.4206% ( 15) 00:10:37.527 12809.309 - 12868.887: 97.5116% ( 11) 00:10:37.527 12868.887 - 12928.465: 97.6438% ( 16) 00:10:37.527 12928.465 - 12988.044: 97.7348% ( 11) 00:10:37.527 12988.044 - 13047.622: 97.8423% ( 13) 00:10:37.527 13047.622 - 13107.200: 97.9249% ( 10) 00:10:37.527 13107.200 - 13166.778: 98.0076% ( 10) 00:10:37.527 13166.778 - 13226.356: 98.0985% ( 11) 00:10:37.527 13226.356 - 13285.935: 98.1978% ( 12) 00:10:37.527 13285.935 - 13345.513: 98.2639% ( 8) 00:10:37.527 13345.513 - 13405.091: 98.3383% ( 9) 00:10:37.527 13405.091 - 13464.669: 98.3879% ( 6) 00:10:37.527 13464.669 - 13524.247: 98.4623% ( 9) 00:10:37.527 13524.247 - 13583.825: 98.5284% ( 8) 00:10:37.527 13583.825 - 13643.404: 98.5532% ( 3) 00:10:37.527 13643.404 - 13702.982: 98.6524% ( 12) 00:10:37.527 13702.982 - 13762.560: 98.6938% ( 5) 00:10:37.527 13762.560 - 13822.138: 98.7517% ( 7) 00:10:37.527 13822.138 - 13881.716: 98.8261% ( 9) 00:10:37.527 13881.716 - 13941.295: 98.8509% ( 3) 00:10:37.527 13941.295 - 14000.873: 98.8839% ( 4) 00:10:37.527 14000.873 - 14060.451: 98.9170% ( 4) 00:10:37.527 14060.451 - 14120.029: 98.9253% ( 1) 00:10:37.527 14120.029 - 14179.607: 98.9418% ( 2) 00:10:37.527 29550.778 - 29669.935: 98.9583% ( 2) 00:10:37.527 29669.935 - 29789.091: 
98.9831% ( 3) 00:10:37.527 29789.091 - 29908.247: 99.0245% ( 5) 00:10:37.527 29908.247 - 30027.404: 99.0410% ( 2) 00:10:37.527 30027.404 - 30146.560: 99.0658% ( 3) 00:10:37.527 30146.560 - 30265.716: 99.0906% ( 3) 00:10:37.527 30265.716 - 30384.873: 99.1237% ( 4) 00:10:37.527 30384.873 - 30504.029: 99.1485% ( 3) 00:10:37.527 30504.029 - 30742.342: 99.1981% ( 6) 00:10:37.527 30742.342 - 30980.655: 99.2560% ( 7) 00:10:37.527 30980.655 - 31218.967: 99.3056% ( 6) 00:10:37.527 31218.967 - 31457.280: 99.3634% ( 7) 00:10:37.527 31457.280 - 31695.593: 99.4213% ( 7) 00:10:37.527 31695.593 - 31933.905: 99.4709% ( 6) 00:10:37.527 37415.098 - 37653.411: 99.4874% ( 2) 00:10:37.527 37653.411 - 37891.724: 99.5288% ( 5) 00:10:37.527 37891.724 - 38130.036: 99.5784% ( 6) 00:10:37.527 38130.036 - 38368.349: 99.6280% ( 6) 00:10:37.527 38368.349 - 38606.662: 99.6776% ( 6) 00:10:37.527 38606.662 - 38844.975: 99.7354% ( 7) 00:10:37.527 38844.975 - 39083.287: 99.7851% ( 6) 00:10:37.527 39083.287 - 39321.600: 99.8429% ( 7) 00:10:37.527 39321.600 - 39559.913: 99.8925% ( 6) 00:10:37.527 39559.913 - 39798.225: 99.9504% ( 7) 00:10:37.527 39798.225 - 40036.538: 100.0000% ( 6) 00:10:37.527 00:10:37.527 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:37.527 ============================================================================== 00:10:37.527 Range in us Cumulative IO count 00:10:37.527 8400.524 - 8460.102: 0.1240% ( 15) 00:10:37.527 8460.102 - 8519.680: 0.3224% ( 24) 00:10:37.527 8519.680 - 8579.258: 0.5870% ( 32) 00:10:37.527 8579.258 - 8638.836: 0.8846% ( 36) 00:10:37.527 8638.836 - 8698.415: 1.3145% ( 52) 00:10:37.527 8698.415 - 8757.993: 1.9511% ( 77) 00:10:37.527 8757.993 - 8817.571: 2.7695% ( 99) 00:10:37.527 8817.571 - 8877.149: 3.7368% ( 117) 00:10:37.527 8877.149 - 8936.727: 4.7619% ( 124) 00:10:37.527 8936.727 - 8996.305: 5.7705% ( 122) 00:10:37.527 8996.305 - 9055.884: 6.7791% ( 122) 00:10:37.527 9055.884 - 9115.462: 7.9117% ( 137) 00:10:37.527 9115.462 - 9175.040: 8.9864% ( 130) 00:10:37.527 9175.040 - 9234.618: 10.1356% ( 139) 00:10:37.527 9234.618 - 9294.196: 11.3509% ( 147) 00:10:37.527 9294.196 - 9353.775: 12.5165% ( 141) 00:10:37.527 9353.775 - 9413.353: 13.7897% ( 154) 00:10:37.527 9413.353 - 9472.931: 15.0711% ( 155) 00:10:37.527 9472.931 - 9532.509: 16.5013% ( 173) 00:10:37.527 9532.509 - 9592.087: 18.0556% ( 188) 00:10:37.527 9592.087 - 9651.665: 19.5850% ( 185) 00:10:37.527 9651.665 - 9711.244: 21.1806% ( 193) 00:10:37.527 9711.244 - 9770.822: 22.9828% ( 218) 00:10:37.527 9770.822 - 9830.400: 25.0083% ( 245) 00:10:37.527 9830.400 - 9889.978: 27.1081% ( 254) 00:10:37.527 9889.978 - 9949.556: 29.4643% ( 285) 00:10:37.527 9949.556 - 10009.135: 31.7543% ( 277) 00:10:37.527 10009.135 - 10068.713: 34.2593% ( 303) 00:10:37.527 10068.713 - 10128.291: 36.8552% ( 314) 00:10:37.527 10128.291 - 10187.869: 39.6412% ( 337) 00:10:37.527 10187.869 - 10247.447: 42.4355% ( 338) 00:10:37.527 10247.447 - 10307.025: 45.4034% ( 359) 00:10:37.527 10307.025 - 10366.604: 48.5367% ( 379) 00:10:37.527 10366.604 - 10426.182: 51.8188% ( 397) 00:10:37.527 10426.182 - 10485.760: 55.0926% ( 396) 00:10:37.527 10485.760 - 10545.338: 58.3416% ( 393) 00:10:37.527 10545.338 - 10604.916: 61.5079% ( 383) 00:10:37.527 10604.916 - 10664.495: 64.7404% ( 391) 00:10:37.527 10664.495 - 10724.073: 68.0308% ( 398) 00:10:37.527 10724.073 - 10783.651: 71.1640% ( 379) 00:10:37.527 10783.651 - 10843.229: 73.9666% ( 339) 00:10:37.527 10843.229 - 10902.807: 76.7692% ( 339) 00:10:37.527 10902.807 - 10962.385: 79.2245% ( 297) 
00:10:37.527 10962.385 - 11021.964: 81.5063% ( 276) 00:10:37.527 11021.964 - 11081.542: 83.4904% ( 240) 00:10:37.527 11081.542 - 11141.120: 85.2844% ( 217) 00:10:37.527 11141.120 - 11200.698: 86.7973% ( 183) 00:10:37.527 11200.698 - 11260.276: 88.1035% ( 158) 00:10:37.527 11260.276 - 11319.855: 89.2444% ( 138) 00:10:37.527 11319.855 - 11379.433: 90.1951% ( 115) 00:10:37.527 11379.433 - 11439.011: 91.1210% ( 112) 00:10:37.527 11439.011 - 11498.589: 91.8899% ( 93) 00:10:37.527 11498.589 - 11558.167: 92.5265% ( 77) 00:10:37.527 11558.167 - 11617.745: 93.0638% ( 65) 00:10:37.527 11617.745 - 11677.324: 93.5681% ( 61) 00:10:37.527 11677.324 - 11736.902: 93.9732% ( 49) 00:10:37.527 11736.902 - 11796.480: 94.3535% ( 46) 00:10:37.527 11796.480 - 11856.058: 94.6511% ( 36) 00:10:37.527 11856.058 - 11915.636: 94.9405% ( 35) 00:10:37.527 11915.636 - 11975.215: 95.1968% ( 31) 00:10:37.527 11975.215 - 12034.793: 95.4365% ( 29) 00:10:37.527 12034.793 - 12094.371: 95.6515% ( 26) 00:10:37.527 12094.371 - 12153.949: 95.8747% ( 27) 00:10:37.527 12153.949 - 12213.527: 96.1144% ( 29) 00:10:37.527 12213.527 - 12273.105: 96.3294% ( 26) 00:10:37.527 12273.105 - 12332.684: 96.5030% ( 21) 00:10:37.527 12332.684 - 12392.262: 96.6435% ( 17) 00:10:37.527 12392.262 - 12451.840: 96.7923% ( 18) 00:10:37.527 12451.840 - 12511.418: 96.9081% ( 14) 00:10:37.527 12511.418 - 12570.996: 97.0155% ( 13) 00:10:37.527 12570.996 - 12630.575: 97.1478% ( 16) 00:10:37.527 12630.575 - 12690.153: 97.2388% ( 11) 00:10:37.527 12690.153 - 12749.731: 97.3214% ( 10) 00:10:37.527 12749.731 - 12809.309: 97.4206% ( 12) 00:10:37.527 12809.309 - 12868.887: 97.5198% ( 12) 00:10:37.527 12868.887 - 12928.465: 97.5860% ( 8) 00:10:37.527 12928.465 - 12988.044: 97.6769% ( 11) 00:10:37.527 12988.044 - 13047.622: 97.7431% ( 8) 00:10:37.527 13047.622 - 13107.200: 97.8340% ( 11) 00:10:37.528 13107.200 - 13166.778: 97.9167% ( 10) 00:10:37.528 13166.778 - 13226.356: 97.9911% ( 9) 00:10:37.528 13226.356 - 13285.935: 98.0655% ( 9) 00:10:37.528 13285.935 - 13345.513: 98.1564% ( 11) 00:10:37.528 13345.513 - 13405.091: 98.2391% ( 10) 00:10:37.528 13405.091 - 13464.669: 98.3135% ( 9) 00:10:37.528 13464.669 - 13524.247: 98.3962% ( 10) 00:10:37.528 13524.247 - 13583.825: 98.4623% ( 8) 00:10:37.528 13583.825 - 13643.404: 98.5450% ( 10) 00:10:37.528 13643.404 - 13702.982: 98.6194% ( 9) 00:10:37.528 13702.982 - 13762.560: 98.6772% ( 7) 00:10:37.528 13762.560 - 13822.138: 98.7351% ( 7) 00:10:37.528 13822.138 - 13881.716: 98.7847% ( 6) 00:10:37.528 13881.716 - 13941.295: 98.8343% ( 6) 00:10:37.528 13941.295 - 14000.873: 98.8922% ( 7) 00:10:37.528 14000.873 - 14060.451: 98.9170% ( 3) 00:10:37.528 14060.451 - 14120.029: 98.9418% ( 3) 00:10:37.528 28478.371 - 28597.527: 98.9501% ( 1) 00:10:37.528 28597.527 - 28716.684: 98.9749% ( 3) 00:10:37.528 28716.684 - 28835.840: 99.0079% ( 4) 00:10:37.528 28835.840 - 28954.996: 99.0327% ( 3) 00:10:37.528 28954.996 - 29074.153: 99.0575% ( 3) 00:10:37.528 29074.153 - 29193.309: 99.0906% ( 4) 00:10:37.528 29193.309 - 29312.465: 99.1154% ( 3) 00:10:37.528 29312.465 - 29431.622: 99.1485% ( 4) 00:10:37.528 29431.622 - 29550.778: 99.1733% ( 3) 00:10:37.528 29550.778 - 29669.935: 99.1981% ( 3) 00:10:37.528 29669.935 - 29789.091: 99.2229% ( 3) 00:10:37.528 29789.091 - 29908.247: 99.2477% ( 3) 00:10:37.528 29908.247 - 30027.404: 99.2725% ( 3) 00:10:37.528 30027.404 - 30146.560: 99.2973% ( 3) 00:10:37.528 30146.560 - 30265.716: 99.3221% ( 3) 00:10:37.528 30265.716 - 30384.873: 99.3552% ( 4) 00:10:37.528 30384.873 - 30504.029: 99.3800% ( 3) 
00:10:37.528 30504.029 - 30742.342: 99.4378% ( 7) 00:10:37.528 30742.342 - 30980.655: 99.4709% ( 4) 00:10:37.528 35508.596 - 35746.909: 99.5122% ( 5) 00:10:37.528 35746.909 - 35985.222: 99.5701% ( 7) 00:10:37.528 35985.222 - 36223.535: 99.6280% ( 7) 00:10:37.528 36223.535 - 36461.847: 99.6858% ( 7) 00:10:37.528 36461.847 - 36700.160: 99.7437% ( 7) 00:10:37.528 36700.160 - 36938.473: 99.8016% ( 7) 00:10:37.528 36938.473 - 37176.785: 99.8595% ( 7) 00:10:37.528 37176.785 - 37415.098: 99.9173% ( 7) 00:10:37.528 37415.098 - 37653.411: 99.9752% ( 7) 00:10:37.528 37653.411 - 37891.724: 100.0000% ( 3) 00:10:37.528 00:10:37.528 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:37.528 ============================================================================== 00:10:37.528 Range in us Cumulative IO count 00:10:37.528 8340.945 - 8400.524: 0.0331% ( 4) 00:10:37.528 8400.524 - 8460.102: 0.1157% ( 10) 00:10:37.528 8460.102 - 8519.680: 0.2728% ( 19) 00:10:37.528 8519.680 - 8579.258: 0.5374% ( 32) 00:10:37.528 8579.258 - 8638.836: 0.8350% ( 36) 00:10:37.528 8638.836 - 8698.415: 1.1657% ( 40) 00:10:37.528 8698.415 - 8757.993: 1.8188% ( 79) 00:10:37.528 8757.993 - 8817.571: 2.6455% ( 100) 00:10:37.528 8817.571 - 8877.149: 3.5384% ( 108) 00:10:37.528 8877.149 - 8936.727: 4.5222% ( 119) 00:10:37.528 8936.727 - 8996.305: 5.6465% ( 136) 00:10:37.528 8996.305 - 9055.884: 6.7626% ( 135) 00:10:37.528 9055.884 - 9115.462: 7.8869% ( 136) 00:10:37.528 9115.462 - 9175.040: 8.9782% ( 132) 00:10:37.528 9175.040 - 9234.618: 10.1273% ( 139) 00:10:37.528 9234.618 - 9294.196: 11.2765% ( 139) 00:10:37.528 9294.196 - 9353.775: 12.4917% ( 147) 00:10:37.528 9353.775 - 9413.353: 13.7731% ( 155) 00:10:37.528 9413.353 - 9472.931: 15.1290% ( 164) 00:10:37.528 9472.931 - 9532.509: 16.5179% ( 168) 00:10:37.528 9532.509 - 9592.087: 17.9894% ( 178) 00:10:37.528 9592.087 - 9651.665: 19.7090% ( 208) 00:10:37.528 9651.665 - 9711.244: 21.4451% ( 210) 00:10:37.528 9711.244 - 9770.822: 23.3052% ( 225) 00:10:37.528 9770.822 - 9830.400: 25.3803% ( 251) 00:10:37.528 9830.400 - 9889.978: 27.6290% ( 272) 00:10:37.528 9889.978 - 9949.556: 30.1091% ( 300) 00:10:37.528 9949.556 - 10009.135: 32.3909% ( 276) 00:10:37.528 10009.135 - 10068.713: 34.7966% ( 291) 00:10:37.528 10068.713 - 10128.291: 37.3429% ( 308) 00:10:37.528 10128.291 - 10187.869: 40.2282% ( 349) 00:10:37.528 10187.869 - 10247.447: 43.1217% ( 350) 00:10:37.528 10247.447 - 10307.025: 46.1227% ( 363) 00:10:37.528 10307.025 - 10366.604: 49.3552% ( 391) 00:10:37.528 10366.604 - 10426.182: 52.4802% ( 378) 00:10:37.528 10426.182 - 10485.760: 55.6465% ( 383) 00:10:37.528 10485.760 - 10545.338: 58.8955% ( 393) 00:10:37.528 10545.338 - 10604.916: 62.0701% ( 384) 00:10:37.528 10604.916 - 10664.495: 65.1538% ( 373) 00:10:37.528 10664.495 - 10724.073: 68.2540% ( 375) 00:10:37.528 10724.073 - 10783.651: 71.2219% ( 359) 00:10:37.528 10783.651 - 10843.229: 73.9997% ( 336) 00:10:37.528 10843.229 - 10902.807: 76.6452% ( 320) 00:10:37.528 10902.807 - 10962.385: 79.2080% ( 310) 00:10:37.528 10962.385 - 11021.964: 81.5228% ( 280) 00:10:37.528 11021.964 - 11081.542: 83.6640% ( 259) 00:10:37.528 11081.542 - 11141.120: 85.5324% ( 226) 00:10:37.528 11141.120 - 11200.698: 87.1197% ( 192) 00:10:37.528 11200.698 - 11260.276: 88.4921% ( 166) 00:10:37.528 11260.276 - 11319.855: 89.6412% ( 139) 00:10:37.528 11319.855 - 11379.433: 90.6911% ( 127) 00:10:37.528 11379.433 - 11439.011: 91.5344% ( 102) 00:10:37.528 11439.011 - 11498.589: 92.3198% ( 95) 00:10:37.528 11498.589 - 11558.167: 92.9894% ( 81) 
00:10:37.528 11558.167 - 11617.745: 93.5433% ( 67) 00:10:37.528 11617.745 - 11677.324: 93.9815% ( 53) 00:10:37.528 11677.324 - 11736.902: 94.3783% ( 48) 00:10:37.528 11736.902 - 11796.480: 94.7255% ( 42) 00:10:37.528 11796.480 - 11856.058: 95.0231% ( 36) 00:10:37.528 11856.058 - 11915.636: 95.2629% ( 29) 00:10:37.528 11915.636 - 11975.215: 95.4861% ( 27) 00:10:37.528 11975.215 - 12034.793: 95.6928% ( 25) 00:10:37.528 12034.793 - 12094.371: 95.9408% ( 30) 00:10:37.528 12094.371 - 12153.949: 96.1640% ( 27) 00:10:37.528 12153.949 - 12213.527: 96.3790% ( 26) 00:10:37.528 12213.527 - 12273.105: 96.5939% ( 26) 00:10:37.528 12273.105 - 12332.684: 96.7593% ( 20) 00:10:37.528 12332.684 - 12392.262: 96.9329% ( 21) 00:10:37.528 12392.262 - 12451.840: 97.0734% ( 17) 00:10:37.528 12451.840 - 12511.418: 97.2305% ( 19) 00:10:37.528 12511.418 - 12570.996: 97.3214% ( 11) 00:10:37.528 12570.996 - 12630.575: 97.4124% ( 11) 00:10:37.528 12630.575 - 12690.153: 97.4950% ( 10) 00:10:37.528 12690.153 - 12749.731: 97.5529% ( 7) 00:10:37.528 12749.731 - 12809.309: 97.6108% ( 7) 00:10:37.528 12809.309 - 12868.887: 97.6769% ( 8) 00:10:37.528 12868.887 - 12928.465: 97.7431% ( 8) 00:10:37.528 12928.465 - 12988.044: 97.7927% ( 6) 00:10:37.528 12988.044 - 13047.622: 97.8423% ( 6) 00:10:37.528 13047.622 - 13107.200: 97.8588% ( 2) 00:10:37.528 13107.200 - 13166.778: 97.8836% ( 3) 00:10:37.528 13524.247 - 13583.825: 97.9001% ( 2) 00:10:37.528 13583.825 - 13643.404: 97.9249% ( 3) 00:10:37.528 13643.404 - 13702.982: 97.9663% ( 5) 00:10:37.528 13702.982 - 13762.560: 98.0241% ( 7) 00:10:37.528 13762.560 - 13822.138: 98.0737% ( 6) 00:10:37.528 13822.138 - 13881.716: 98.1316% ( 7) 00:10:37.528 13881.716 - 13941.295: 98.1812% ( 6) 00:10:37.528 13941.295 - 14000.873: 98.2308% ( 6) 00:10:37.528 14000.873 - 14060.451: 98.2887% ( 7) 00:10:37.528 14060.451 - 14120.029: 98.3466% ( 7) 00:10:37.528 14120.029 - 14179.607: 98.3962% ( 6) 00:10:37.528 14179.607 - 14239.185: 98.4458% ( 6) 00:10:37.528 14239.185 - 14298.764: 98.4954% ( 6) 00:10:37.528 14298.764 - 14358.342: 98.5450% ( 6) 00:10:37.528 14358.342 - 14417.920: 98.6028% ( 7) 00:10:37.528 14417.920 - 14477.498: 98.6524% ( 6) 00:10:37.528 14477.498 - 14537.076: 98.7021% ( 6) 00:10:37.528 14537.076 - 14596.655: 98.7599% ( 7) 00:10:37.528 14596.655 - 14656.233: 98.8095% ( 6) 00:10:37.528 14656.233 - 14715.811: 98.8674% ( 7) 00:10:37.528 14715.811 - 14775.389: 98.9005% ( 4) 00:10:37.528 14775.389 - 14834.967: 98.9335% ( 4) 00:10:37.528 14834.967 - 14894.545: 98.9418% ( 1) 00:10:37.528 26691.025 - 26810.182: 98.9501% ( 1) 00:10:37.528 26810.182 - 26929.338: 98.9749% ( 3) 00:10:37.528 26929.338 - 27048.495: 98.9997% ( 3) 00:10:37.528 27048.495 - 27167.651: 99.0245% ( 3) 00:10:37.528 27167.651 - 27286.807: 99.0493% ( 3) 00:10:37.528 27286.807 - 27405.964: 99.0741% ( 3) 00:10:37.528 27405.964 - 27525.120: 99.0989% ( 3) 00:10:37.528 27525.120 - 27644.276: 99.1319% ( 4) 00:10:37.528 27644.276 - 27763.433: 99.1567% ( 3) 00:10:37.528 27763.433 - 27882.589: 99.1815% ( 3) 00:10:37.528 27882.589 - 28001.745: 99.2146% ( 4) 00:10:37.528 28001.745 - 28120.902: 99.2394% ( 3) 00:10:37.528 28120.902 - 28240.058: 99.2725% ( 4) 00:10:37.528 28240.058 - 28359.215: 99.2973% ( 3) 00:10:37.528 28359.215 - 28478.371: 99.3221% ( 3) 00:10:37.528 28478.371 - 28597.527: 99.3552% ( 4) 00:10:37.528 28597.527 - 28716.684: 99.3800% ( 3) 00:10:37.528 28716.684 - 28835.840: 99.4130% ( 4) 00:10:37.528 28835.840 - 28954.996: 99.4378% ( 3) 00:10:37.528 28954.996 - 29074.153: 99.4709% ( 4) 00:10:37.528 33602.095 - 33840.407: 
99.4874% ( 2) 00:10:37.528 33840.407 - 34078.720: 99.5288% ( 5) 00:10:37.528 34078.720 - 34317.033: 99.5866% ( 7) 00:10:37.528 34317.033 - 34555.345: 99.6362% ( 6) 00:10:37.528 34555.345 - 34793.658: 99.6941% ( 7) 00:10:37.528 34793.658 - 35031.971: 99.7520% ( 7) 00:10:37.528 35031.971 - 35270.284: 99.8099% ( 7) 00:10:37.528 35270.284 - 35508.596: 99.8677% ( 7) 00:10:37.528 35508.596 - 35746.909: 99.9173% ( 6) 00:10:37.528 35746.909 - 35985.222: 99.9669% ( 6) 00:10:37.528 35985.222 - 36223.535: 100.0000% ( 4) 00:10:37.528 00:10:37.528 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:37.528 ============================================================================== 00:10:37.528 Range in us Cumulative IO count 00:10:37.528 8340.945 - 8400.524: 0.0496% ( 6) 00:10:37.529 8400.524 - 8460.102: 0.1405% ( 11) 00:10:37.529 8460.102 - 8519.680: 0.3390% ( 24) 00:10:37.529 8519.680 - 8579.258: 0.5787% ( 29) 00:10:37.529 8579.258 - 8638.836: 0.9094% ( 40) 00:10:37.529 8638.836 - 8698.415: 1.3393% ( 52) 00:10:37.529 8698.415 - 8757.993: 1.9097% ( 69) 00:10:37.529 8757.993 - 8817.571: 2.6786% ( 93) 00:10:37.529 8817.571 - 8877.149: 3.5880% ( 110) 00:10:37.529 8877.149 - 8936.727: 4.5966% ( 122) 00:10:37.529 8936.727 - 8996.305: 5.6052% ( 122) 00:10:37.529 8996.305 - 9055.884: 6.7130% ( 134) 00:10:37.529 9055.884 - 9115.462: 7.7877% ( 130) 00:10:37.529 9115.462 - 9175.040: 8.8707% ( 131) 00:10:37.529 9175.040 - 9234.618: 9.9702% ( 133) 00:10:37.529 9234.618 - 9294.196: 11.1276% ( 140) 00:10:37.529 9294.196 - 9353.775: 12.4752% ( 163) 00:10:37.529 9353.775 - 9413.353: 13.7566% ( 155) 00:10:37.529 9413.353 - 9472.931: 15.0463% ( 156) 00:10:37.529 9472.931 - 9532.509: 16.4683% ( 172) 00:10:37.529 9532.509 - 9592.087: 17.9729% ( 182) 00:10:37.529 9592.087 - 9651.665: 19.5850% ( 195) 00:10:37.529 9651.665 - 9711.244: 21.2963% ( 207) 00:10:37.529 9711.244 - 9770.822: 23.2060% ( 231) 00:10:37.529 9770.822 - 9830.400: 25.1653% ( 237) 00:10:37.529 9830.400 - 9889.978: 27.3396% ( 263) 00:10:37.529 9889.978 - 9949.556: 29.6875% ( 284) 00:10:37.529 9949.556 - 10009.135: 31.9940% ( 279) 00:10:37.529 10009.135 - 10068.713: 34.3171% ( 281) 00:10:37.529 10068.713 - 10128.291: 36.8800% ( 310) 00:10:37.529 10128.291 - 10187.869: 39.5503% ( 323) 00:10:37.529 10187.869 - 10247.447: 42.5017% ( 357) 00:10:37.529 10247.447 - 10307.025: 45.3704% ( 347) 00:10:37.529 10307.025 - 10366.604: 48.4044% ( 367) 00:10:37.529 10366.604 - 10426.182: 51.6700% ( 395) 00:10:37.529 10426.182 - 10485.760: 54.7867% ( 377) 00:10:37.529 10485.760 - 10545.338: 57.9944% ( 388) 00:10:37.529 10545.338 - 10604.916: 61.1442% ( 381) 00:10:37.529 10604.916 - 10664.495: 64.4345% ( 398) 00:10:37.529 10664.495 - 10724.073: 67.5678% ( 379) 00:10:37.529 10724.073 - 10783.651: 70.6019% ( 367) 00:10:37.529 10783.651 - 10843.229: 73.3135% ( 328) 00:10:37.529 10843.229 - 10902.807: 75.9342% ( 317) 00:10:37.529 10902.807 - 10962.385: 78.4392% ( 303) 00:10:37.529 10962.385 - 11021.964: 80.8118% ( 287) 00:10:37.529 11021.964 - 11081.542: 82.9448% ( 258) 00:10:37.529 11081.542 - 11141.120: 84.7222% ( 215) 00:10:37.529 11141.120 - 11200.698: 86.3674% ( 199) 00:10:37.529 11200.698 - 11260.276: 87.8142% ( 175) 00:10:37.529 11260.276 - 11319.855: 89.0046% ( 144) 00:10:37.529 11319.855 - 11379.433: 90.0876% ( 131) 00:10:37.529 11379.433 - 11439.011: 91.0384% ( 115) 00:10:37.529 11439.011 - 11498.589: 91.8816% ( 102) 00:10:37.529 11498.589 - 11558.167: 92.6009% ( 87) 00:10:37.529 11558.167 - 11617.745: 93.2126% ( 74) 00:10:37.529 11617.745 - 11677.324: 
93.7913% ( 70) 00:10:37.529 11677.324 - 11736.902: 94.2874% ( 60) 00:10:37.529 11736.902 - 11796.480: 94.6925% ( 49) 00:10:37.529 11796.480 - 11856.058: 95.0314% ( 41) 00:10:37.529 11856.058 - 11915.636: 95.3456% ( 38) 00:10:37.529 11915.636 - 11975.215: 95.6515% ( 37) 00:10:37.529 11975.215 - 12034.793: 95.9243% ( 33) 00:10:37.529 12034.793 - 12094.371: 96.1888% ( 32) 00:10:37.529 12094.371 - 12153.949: 96.4368% ( 30) 00:10:37.529 12153.949 - 12213.527: 96.6931% ( 31) 00:10:37.529 12213.527 - 12273.105: 96.9246% ( 28) 00:10:37.529 12273.105 - 12332.684: 97.1478% ( 27) 00:10:37.529 12332.684 - 12392.262: 97.3545% ( 25) 00:10:37.529 12392.262 - 12451.840: 97.5364% ( 22) 00:10:37.529 12451.840 - 12511.418: 97.7183% ( 22) 00:10:37.529 12511.418 - 12570.996: 97.8836% ( 20) 00:10:37.529 12570.996 - 12630.575: 97.9828% ( 12) 00:10:37.529 12630.575 - 12690.153: 98.0903% ( 13) 00:10:37.529 12690.153 - 12749.731: 98.1812% ( 11) 00:10:37.529 12749.731 - 12809.309: 98.2391% ( 7) 00:10:37.529 12809.309 - 12868.887: 98.2804% ( 5) 00:10:37.529 12868.887 - 12928.465: 98.3135% ( 4) 00:10:37.529 12928.465 - 12988.044: 98.3548% ( 5) 00:10:37.529 12988.044 - 13047.622: 98.3879% ( 4) 00:10:37.529 13047.622 - 13107.200: 98.4127% ( 3) 00:10:37.529 13643.404 - 13702.982: 98.4375% ( 3) 00:10:37.529 13702.982 - 13762.560: 98.4706% ( 4) 00:10:37.529 13762.560 - 13822.138: 98.4871% ( 2) 00:10:37.529 13822.138 - 13881.716: 98.5119% ( 3) 00:10:37.529 13881.716 - 13941.295: 98.5284% ( 2) 00:10:37.529 13941.295 - 14000.873: 98.5615% ( 4) 00:10:37.529 14000.873 - 14060.451: 98.5863% ( 3) 00:10:37.529 14060.451 - 14120.029: 98.6028% ( 2) 00:10:37.529 14120.029 - 14179.607: 98.6359% ( 4) 00:10:37.529 14179.607 - 14239.185: 98.6607% ( 3) 00:10:37.529 14239.185 - 14298.764: 98.6855% ( 3) 00:10:37.529 14298.764 - 14358.342: 98.7186% ( 4) 00:10:37.529 14358.342 - 14417.920: 98.7434% ( 3) 00:10:37.529 14417.920 - 14477.498: 98.7682% ( 3) 00:10:37.529 14477.498 - 14537.076: 98.8013% ( 4) 00:10:37.529 14537.076 - 14596.655: 98.8261% ( 3) 00:10:37.529 14596.655 - 14656.233: 98.8426% ( 2) 00:10:37.529 14656.233 - 14715.811: 98.8674% ( 3) 00:10:37.529 14715.811 - 14775.389: 98.8922% ( 3) 00:10:37.529 14775.389 - 14834.967: 98.9170% ( 3) 00:10:37.529 14834.967 - 14894.545: 98.9335% ( 2) 00:10:37.529 14894.545 - 14954.124: 98.9418% ( 1) 00:10:37.529 24427.055 - 24546.211: 98.9583% ( 2) 00:10:37.529 24546.211 - 24665.367: 98.9831% ( 3) 00:10:37.529 24665.367 - 24784.524: 99.0162% ( 4) 00:10:37.529 24784.524 - 24903.680: 99.0410% ( 3) 00:10:37.529 24903.680 - 25022.836: 99.0658% ( 3) 00:10:37.529 25022.836 - 25141.993: 99.0989% ( 4) 00:10:37.529 25141.993 - 25261.149: 99.1319% ( 4) 00:10:37.529 25261.149 - 25380.305: 99.1567% ( 3) 00:10:37.529 25380.305 - 25499.462: 99.1815% ( 3) 00:10:37.529 25499.462 - 25618.618: 99.2063% ( 3) 00:10:37.529 25618.618 - 25737.775: 99.2394% ( 4) 00:10:37.529 25737.775 - 25856.931: 99.2642% ( 3) 00:10:37.529 25856.931 - 25976.087: 99.2890% ( 3) 00:10:37.529 25976.087 - 26095.244: 99.3221% ( 4) 00:10:37.529 26095.244 - 26214.400: 99.3552% ( 4) 00:10:37.529 26214.400 - 26333.556: 99.3800% ( 3) 00:10:37.529 26333.556 - 26452.713: 99.4048% ( 3) 00:10:37.529 26452.713 - 26571.869: 99.4296% ( 3) 00:10:37.529 26571.869 - 26691.025: 99.4626% ( 4) 00:10:37.529 26691.025 - 26810.182: 99.4709% ( 1) 00:10:37.529 31457.280 - 31695.593: 99.5288% ( 7) 00:10:37.529 31695.593 - 31933.905: 99.5701% ( 5) 00:10:37.529 31933.905 - 32172.218: 99.6280% ( 7) 00:10:37.529 32172.218 - 32410.531: 99.6858% ( 7) 00:10:37.529 
32410.531 - 32648.844: 99.7354% ( 6) 00:10:37.529 32648.844 - 32887.156: 99.7933% ( 7) 00:10:37.529 32887.156 - 33125.469: 99.8512% ( 7) 00:10:37.529 33125.469 - 33363.782: 99.9008% ( 6) 00:10:37.529 33363.782 - 33602.095: 99.9504% ( 6) 00:10:37.529 33602.095 - 33840.407: 100.0000% ( 6) 00:10:37.529 00:10:37.529 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:37.529 ============================================================================== 00:10:37.529 Range in us Cumulative IO count 00:10:37.529 8340.945 - 8400.524: 0.0248% ( 3) 00:10:37.529 8400.524 - 8460.102: 0.0661% ( 5) 00:10:37.529 8460.102 - 8519.680: 0.2149% ( 18) 00:10:37.529 8519.680 - 8579.258: 0.5126% ( 36) 00:10:37.529 8579.258 - 8638.836: 0.8267% ( 38) 00:10:37.529 8638.836 - 8698.415: 1.2318% ( 49) 00:10:37.529 8698.415 - 8757.993: 1.7526% ( 63) 00:10:37.529 8757.993 - 8817.571: 2.4802% ( 88) 00:10:37.529 8817.571 - 8877.149: 3.4226% ( 114) 00:10:37.529 8877.149 - 8936.727: 4.4064% ( 119) 00:10:37.529 8936.727 - 8996.305: 5.5142% ( 134) 00:10:37.529 8996.305 - 9055.884: 6.6303% ( 135) 00:10:37.529 9055.884 - 9115.462: 7.7877% ( 140) 00:10:37.529 9115.462 - 9175.040: 8.9286% ( 138) 00:10:37.529 9175.040 - 9234.618: 10.0777% ( 139) 00:10:37.529 9234.618 - 9294.196: 11.2930% ( 147) 00:10:37.529 9294.196 - 9353.775: 12.5744% ( 155) 00:10:37.529 9353.775 - 9413.353: 13.8641% ( 156) 00:10:37.529 9413.353 - 9472.931: 15.2860% ( 172) 00:10:37.529 9472.931 - 9532.509: 16.6501% ( 165) 00:10:37.529 9532.509 - 9592.087: 18.1713% ( 184) 00:10:37.529 9592.087 - 9651.665: 19.7834% ( 195) 00:10:37.529 9651.665 - 9711.244: 21.6353% ( 224) 00:10:37.529 9711.244 - 9770.822: 23.6028% ( 238) 00:10:37.529 9770.822 - 9830.400: 25.7358% ( 258) 00:10:37.529 9830.400 - 9889.978: 27.9101% ( 263) 00:10:37.529 9889.978 - 9949.556: 30.2579% ( 284) 00:10:37.529 9949.556 - 10009.135: 32.5397% ( 276) 00:10:37.529 10009.135 - 10068.713: 34.8049% ( 274) 00:10:37.529 10068.713 - 10128.291: 37.3512% ( 308) 00:10:37.529 10128.291 - 10187.869: 39.9471% ( 314) 00:10:37.529 10187.869 - 10247.447: 42.6587% ( 328) 00:10:37.529 10247.447 - 10307.025: 45.5936% ( 355) 00:10:37.529 10307.025 - 10366.604: 48.6772% ( 373) 00:10:37.529 10366.604 - 10426.182: 51.7444% ( 371) 00:10:37.529 10426.182 - 10485.760: 54.7784% ( 367) 00:10:37.529 10485.760 - 10545.338: 58.0771% ( 399) 00:10:37.529 10545.338 - 10604.916: 61.3343% ( 394) 00:10:37.529 10604.916 - 10664.495: 64.4345% ( 375) 00:10:37.529 10664.495 - 10724.073: 67.4769% ( 368) 00:10:37.529 10724.073 - 10783.651: 70.3125% ( 343) 00:10:37.529 10783.651 - 10843.229: 73.1233% ( 340) 00:10:37.529 10843.229 - 10902.807: 75.7027% ( 312) 00:10:37.529 10902.807 - 10962.385: 78.2655% ( 310) 00:10:37.529 10962.385 - 11021.964: 80.5556% ( 277) 00:10:37.529 11021.964 - 11081.542: 82.6141% ( 249) 00:10:37.529 11081.542 - 11141.120: 84.5155% ( 230) 00:10:37.529 11141.120 - 11200.698: 86.1607% ( 199) 00:10:37.529 11200.698 - 11260.276: 87.6653% ( 182) 00:10:37.529 11260.276 - 11319.855: 88.8972% ( 149) 00:10:37.529 11319.855 - 11379.433: 89.9554% ( 128) 00:10:37.529 11379.433 - 11439.011: 90.9474% ( 120) 00:10:37.529 11439.011 - 11498.589: 91.7907% ( 102) 00:10:37.529 11498.589 - 11558.167: 92.5017% ( 86) 00:10:37.530 11558.167 - 11617.745: 93.0969% ( 72) 00:10:37.530 11617.745 - 11677.324: 93.5516% ( 55) 00:10:37.530 11677.324 - 11736.902: 94.0311% ( 58) 00:10:37.530 11736.902 - 11796.480: 94.4031% ( 45) 00:10:37.530 11796.480 - 11856.058: 94.7338% ( 40) 00:10:37.530 11856.058 - 11915.636: 95.0810% ( 42) 
00:10:37.530 11915.636 - 11975.215: 95.3704% ( 35) 00:10:37.530 11975.215 - 12034.793: 95.6349% ( 32) 00:10:37.530 12034.793 - 12094.371: 95.8747% ( 29) 00:10:37.530 12094.371 - 12153.949: 96.1310% ( 31) 00:10:37.530 12153.949 - 12213.527: 96.3707% ( 29) 00:10:37.530 12213.527 - 12273.105: 96.6270% ( 31) 00:10:37.530 12273.105 - 12332.684: 96.8833% ( 31) 00:10:37.530 12332.684 - 12392.262: 97.0569% ( 21) 00:10:37.530 12392.262 - 12451.840: 97.2388% ( 22) 00:10:37.530 12451.840 - 12511.418: 97.4289% ( 23) 00:10:37.530 12511.418 - 12570.996: 97.5446% ( 14) 00:10:37.530 12570.996 - 12630.575: 97.6521% ( 13) 00:10:37.530 12630.575 - 12690.153: 97.7679% ( 14) 00:10:37.530 12690.153 - 12749.731: 97.8836% ( 14) 00:10:37.530 12749.731 - 12809.309: 98.0076% ( 15) 00:10:37.530 12809.309 - 12868.887: 98.1068% ( 12) 00:10:37.530 12868.887 - 12928.465: 98.1895% ( 10) 00:10:37.530 12928.465 - 12988.044: 98.2391% ( 6) 00:10:37.530 12988.044 - 13047.622: 98.2804% ( 5) 00:10:37.530 13047.622 - 13107.200: 98.3135% ( 4) 00:10:37.530 13107.200 - 13166.778: 98.3466% ( 4) 00:10:37.530 13166.778 - 13226.356: 98.3796% ( 4) 00:10:37.530 13226.356 - 13285.935: 98.4127% ( 4) 00:10:37.530 13345.513 - 13405.091: 98.4292% ( 2) 00:10:37.530 13405.091 - 13464.669: 98.4540% ( 3) 00:10:37.530 13464.669 - 13524.247: 98.4788% ( 3) 00:10:37.530 13524.247 - 13583.825: 98.5036% ( 3) 00:10:37.530 13583.825 - 13643.404: 98.5202% ( 2) 00:10:37.530 13643.404 - 13702.982: 98.5450% ( 3) 00:10:37.530 13702.982 - 13762.560: 98.5698% ( 3) 00:10:37.530 13762.560 - 13822.138: 98.5946% ( 3) 00:10:37.530 13822.138 - 13881.716: 98.6111% ( 2) 00:10:37.530 13881.716 - 13941.295: 98.6359% ( 3) 00:10:37.530 13941.295 - 14000.873: 98.6524% ( 2) 00:10:37.530 14000.873 - 14060.451: 98.6690% ( 2) 00:10:37.530 14060.451 - 14120.029: 98.6938% ( 3) 00:10:37.530 14120.029 - 14179.607: 98.7186% ( 3) 00:10:37.530 14179.607 - 14239.185: 98.7434% ( 3) 00:10:37.530 14239.185 - 14298.764: 98.7682% ( 3) 00:10:37.530 14298.764 - 14358.342: 98.7930% ( 3) 00:10:37.530 14358.342 - 14417.920: 98.8178% ( 3) 00:10:37.530 14417.920 - 14477.498: 98.8426% ( 3) 00:10:37.530 14477.498 - 14537.076: 98.8674% ( 3) 00:10:37.530 14537.076 - 14596.655: 98.8922% ( 3) 00:10:37.530 14596.655 - 14656.233: 98.9087% ( 2) 00:10:37.530 14656.233 - 14715.811: 98.9335% ( 3) 00:10:37.530 14715.811 - 14775.389: 98.9418% ( 1) 00:10:37.530 22163.084 - 22282.240: 98.9666% ( 3) 00:10:37.530 22282.240 - 22401.396: 98.9914% ( 3) 00:10:37.530 22401.396 - 22520.553: 99.0245% ( 4) 00:10:37.530 22520.553 - 22639.709: 99.0493% ( 3) 00:10:37.530 22639.709 - 22758.865: 99.0741% ( 3) 00:10:37.530 22758.865 - 22878.022: 99.1071% ( 4) 00:10:37.530 22878.022 - 22997.178: 99.1319% ( 3) 00:10:37.530 22997.178 - 23116.335: 99.1650% ( 4) 00:10:37.530 23116.335 - 23235.491: 99.1898% ( 3) 00:10:37.530 23235.491 - 23354.647: 99.2146% ( 3) 00:10:37.530 23354.647 - 23473.804: 99.2477% ( 4) 00:10:37.530 23473.804 - 23592.960: 99.2725% ( 3) 00:10:37.530 23592.960 - 23712.116: 99.2973% ( 3) 00:10:37.530 23712.116 - 23831.273: 99.3304% ( 4) 00:10:37.530 23831.273 - 23950.429: 99.3552% ( 3) 00:10:37.530 23950.429 - 24069.585: 99.3800% ( 3) 00:10:37.530 24069.585 - 24188.742: 99.4130% ( 4) 00:10:37.530 24188.742 - 24307.898: 99.4378% ( 3) 00:10:37.530 24307.898 - 24427.055: 99.4709% ( 4) 00:10:37.530 29074.153 - 29193.309: 99.4792% ( 1) 00:10:37.530 29193.309 - 29312.465: 99.5040% ( 3) 00:10:37.530 29312.465 - 29431.622: 99.5288% ( 3) 00:10:37.530 29431.622 - 29550.778: 99.5536% ( 3) 00:10:37.530 29550.778 - 29669.935: 
99.5866% ( 4) 00:10:37.530 29669.935 - 29789.091: 99.6032% ( 2) 00:10:37.530 29789.091 - 29908.247: 99.6280% ( 3) 00:10:37.530 29908.247 - 30027.404: 99.6610% ( 4) 00:10:37.530 30027.404 - 30146.560: 99.6858% ( 3) 00:10:37.530 30146.560 - 30265.716: 99.7189% ( 4) 00:10:37.530 30265.716 - 30384.873: 99.7437% ( 3) 00:10:37.530 30384.873 - 30504.029: 99.7685% ( 3) 00:10:37.530 30504.029 - 30742.342: 99.8264% ( 7) 00:10:37.530 30742.342 - 30980.655: 99.8843% ( 7) 00:10:37.530 30980.655 - 31218.967: 99.9421% ( 7) 00:10:37.530 31218.967 - 31457.280: 100.0000% ( 7) 00:10:37.530 00:10:37.530 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:37.530 ============================================================================== 00:10:37.530 Range in us Cumulative IO count 00:10:37.530 8340.945 - 8400.524: 0.0248% ( 3) 00:10:37.530 8400.524 - 8460.102: 0.0909% ( 8) 00:10:37.530 8460.102 - 8519.680: 0.2149% ( 15) 00:10:37.530 8519.680 - 8579.258: 0.4464% ( 28) 00:10:37.530 8579.258 - 8638.836: 0.7523% ( 37) 00:10:37.530 8638.836 - 8698.415: 1.1987% ( 54) 00:10:37.530 8698.415 - 8757.993: 1.7030% ( 61) 00:10:37.530 8757.993 - 8817.571: 2.3562% ( 79) 00:10:37.530 8817.571 - 8877.149: 3.3317% ( 118) 00:10:37.530 8877.149 - 8936.727: 4.3733% ( 126) 00:10:37.530 8936.727 - 8996.305: 5.4150% ( 126) 00:10:37.530 8996.305 - 9055.884: 6.4484% ( 125) 00:10:37.530 9055.884 - 9115.462: 7.5066% ( 128) 00:10:37.530 9115.462 - 9175.040: 8.6723% ( 141) 00:10:37.530 9175.040 - 9234.618: 9.8380% ( 141) 00:10:37.530 9234.618 - 9294.196: 10.9871% ( 139) 00:10:37.530 9294.196 - 9353.775: 12.2768% ( 156) 00:10:37.530 9353.775 - 9413.353: 13.5913% ( 159) 00:10:37.530 9413.353 - 9472.931: 14.8644% ( 154) 00:10:37.530 9472.931 - 9532.509: 16.1789% ( 159) 00:10:37.530 9532.509 - 9592.087: 17.6339% ( 176) 00:10:37.530 9592.087 - 9651.665: 19.3370% ( 206) 00:10:37.530 9651.665 - 9711.244: 21.0731% ( 210) 00:10:37.530 9711.244 - 9770.822: 22.9745% ( 230) 00:10:37.530 9770.822 - 9830.400: 25.1323% ( 261) 00:10:37.530 9830.400 - 9889.978: 27.4140% ( 276) 00:10:37.530 9889.978 - 9949.556: 29.8611% ( 296) 00:10:37.530 9949.556 - 10009.135: 32.2338% ( 287) 00:10:37.530 10009.135 - 10068.713: 34.5486% ( 280) 00:10:37.530 10068.713 - 10128.291: 37.0784% ( 306) 00:10:37.530 10128.291 - 10187.869: 39.7569% ( 324) 00:10:37.530 10187.869 - 10247.447: 42.6753% ( 353) 00:10:37.530 10247.447 - 10307.025: 45.7424% ( 371) 00:10:37.530 10307.025 - 10366.604: 48.7847% ( 368) 00:10:37.530 10366.604 - 10426.182: 52.0089% ( 390) 00:10:37.530 10426.182 - 10485.760: 55.2001% ( 386) 00:10:37.530 10485.760 - 10545.338: 58.4408% ( 392) 00:10:37.530 10545.338 - 10604.916: 61.6815% ( 392) 00:10:37.530 10604.916 - 10664.495: 64.9140% ( 391) 00:10:37.530 10664.495 - 10724.073: 68.0886% ( 384) 00:10:37.530 10724.073 - 10783.651: 71.0152% ( 354) 00:10:37.530 10783.651 - 10843.229: 73.8013% ( 337) 00:10:37.530 10843.229 - 10902.807: 76.5790% ( 336) 00:10:37.530 10902.807 - 10962.385: 79.0344% ( 297) 00:10:37.530 10962.385 - 11021.964: 81.2748% ( 271) 00:10:37.530 11021.964 - 11081.542: 83.3251% ( 248) 00:10:37.530 11081.542 - 11141.120: 85.2100% ( 228) 00:10:37.530 11141.120 - 11200.698: 86.8386% ( 197) 00:10:37.530 11200.698 - 11260.276: 88.1366% ( 157) 00:10:37.530 11260.276 - 11319.855: 89.2278% ( 132) 00:10:37.530 11319.855 - 11379.433: 90.3439% ( 135) 00:10:37.530 11379.433 - 11439.011: 91.1706% ( 100) 00:10:37.530 11439.011 - 11498.589: 91.9478% ( 94) 00:10:37.530 11498.589 - 11558.167: 92.6505% ( 85) 00:10:37.530 11558.167 - 11617.745: 
93.1796% ( 64) 00:10:37.530 11617.745 - 11677.324: 93.6260% ( 54) 00:10:37.530 11677.324 - 11736.902: 93.9732% ( 42) 00:10:37.530 11736.902 - 11796.480: 94.3204% ( 42) 00:10:37.530 11796.480 - 11856.058: 94.6511% ( 40) 00:10:37.530 11856.058 - 11915.636: 94.9487% ( 36) 00:10:37.530 11915.636 - 11975.215: 95.2133% ( 32) 00:10:37.530 11975.215 - 12034.793: 95.4282% ( 26) 00:10:37.530 12034.793 - 12094.371: 95.6597% ( 28) 00:10:37.530 12094.371 - 12153.949: 95.8581% ( 24) 00:10:37.530 12153.949 - 12213.527: 96.0731% ( 26) 00:10:37.530 12213.527 - 12273.105: 96.2798% ( 25) 00:10:37.530 12273.105 - 12332.684: 96.4451% ( 20) 00:10:37.530 12332.684 - 12392.262: 96.6104% ( 20) 00:10:37.530 12392.262 - 12451.840: 96.8171% ( 25) 00:10:37.530 12451.840 - 12511.418: 96.9825% ( 20) 00:10:37.530 12511.418 - 12570.996: 97.1230% ( 17) 00:10:37.530 12570.996 - 12630.575: 97.2388% ( 14) 00:10:37.530 12630.575 - 12690.153: 97.3380% ( 12) 00:10:37.530 12690.153 - 12749.731: 97.4537% ( 14) 00:10:37.530 12749.731 - 12809.309: 97.5612% ( 13) 00:10:37.530 12809.309 - 12868.887: 97.6356% ( 9) 00:10:37.530 12868.887 - 12928.465: 97.7183% ( 10) 00:10:37.530 12928.465 - 12988.044: 97.8257% ( 13) 00:10:37.530 12988.044 - 13047.622: 97.9415% ( 14) 00:10:37.530 13047.622 - 13107.200: 98.0572% ( 14) 00:10:37.530 13107.200 - 13166.778: 98.1647% ( 13) 00:10:37.530 13166.778 - 13226.356: 98.2722% ( 13) 00:10:37.530 13226.356 - 13285.935: 98.3962% ( 15) 00:10:37.530 13285.935 - 13345.513: 98.4954% ( 12) 00:10:37.530 13345.513 - 13405.091: 98.5780% ( 10) 00:10:37.530 13405.091 - 13464.669: 98.6194% ( 5) 00:10:37.530 13464.669 - 13524.247: 98.6442% ( 3) 00:10:37.530 13524.247 - 13583.825: 98.6690% ( 3) 00:10:37.530 13583.825 - 13643.404: 98.6855% ( 2) 00:10:37.531 13643.404 - 13702.982: 98.7103% ( 3) 00:10:37.531 13702.982 - 13762.560: 98.7351% ( 3) 00:10:37.531 13762.560 - 13822.138: 98.7599% ( 3) 00:10:37.531 13822.138 - 13881.716: 98.7847% ( 3) 00:10:37.531 13881.716 - 13941.295: 98.7930% ( 1) 00:10:37.531 13941.295 - 14000.873: 98.8178% ( 3) 00:10:37.531 14000.873 - 14060.451: 98.8426% ( 3) 00:10:37.531 14060.451 - 14120.029: 98.8591% ( 2) 00:10:37.531 14120.029 - 14179.607: 98.8839% ( 3) 00:10:37.531 14179.607 - 14239.185: 98.9087% ( 3) 00:10:37.531 14239.185 - 14298.764: 98.9335% ( 3) 00:10:37.531 14298.764 - 14358.342: 98.9418% ( 1) 00:10:37.531 19899.113 - 20018.269: 98.9666% ( 3) 00:10:37.531 20018.269 - 20137.425: 98.9914% ( 3) 00:10:37.531 20137.425 - 20256.582: 99.0162% ( 3) 00:10:37.531 20256.582 - 20375.738: 99.0493% ( 4) 00:10:37.531 20375.738 - 20494.895: 99.0741% ( 3) 00:10:37.531 20494.895 - 20614.051: 99.1071% ( 4) 00:10:37.531 20614.051 - 20733.207: 99.1402% ( 4) 00:10:37.531 20733.207 - 20852.364: 99.1650% ( 3) 00:10:37.531 20852.364 - 20971.520: 99.1898% ( 3) 00:10:37.531 20971.520 - 21090.676: 99.2229% ( 4) 00:10:37.531 21090.676 - 21209.833: 99.2477% ( 3) 00:10:37.531 21209.833 - 21328.989: 99.2808% ( 4) 00:10:37.531 21328.989 - 21448.145: 99.3056% ( 3) 00:10:37.531 21448.145 - 21567.302: 99.3304% ( 3) 00:10:37.531 21567.302 - 21686.458: 99.3552% ( 3) 00:10:37.531 21686.458 - 21805.615: 99.3800% ( 3) 00:10:37.531 21805.615 - 21924.771: 99.4048% ( 3) 00:10:37.531 21924.771 - 22043.927: 99.4296% ( 3) 00:10:37.531 22043.927 - 22163.084: 99.4626% ( 4) 00:10:37.531 22163.084 - 22282.240: 99.4709% ( 1) 00:10:37.531 26810.182 - 26929.338: 99.4874% ( 2) 00:10:37.531 26929.338 - 27048.495: 99.5205% ( 4) 00:10:37.531 27048.495 - 27167.651: 99.5453% ( 3) 00:10:37.531 27167.651 - 27286.807: 99.5701% ( 3) 
00:10:37.531 27286.807 - 27405.964: 99.6032% ( 4) 00:10:37.531 27405.964 - 27525.120: 99.6280% ( 3) 00:10:37.531 27525.120 - 27644.276: 99.6528% ( 3) 00:10:37.531 27644.276 - 27763.433: 99.6858% ( 4) 00:10:37.531 27763.433 - 27882.589: 99.7106% ( 3) 00:10:37.531 27882.589 - 28001.745: 99.7437% ( 4) 00:10:37.531 28001.745 - 28120.902: 99.7685% ( 3) 00:10:37.531 28120.902 - 28240.058: 99.7933% ( 3) 00:10:37.531 28240.058 - 28359.215: 99.8181% ( 3) 00:10:37.531 28359.215 - 28478.371: 99.8512% ( 4) 00:10:37.531 28478.371 - 28597.527: 99.8843% ( 4) 00:10:37.531 28597.527 - 28716.684: 99.9091% ( 3) 00:10:37.531 28716.684 - 28835.840: 99.9339% ( 3) 00:10:37.531 28835.840 - 28954.996: 99.9669% ( 4) 00:10:37.531 28954.996 - 29074.153: 99.9917% ( 3) 00:10:37.531 29074.153 - 29193.309: 100.0000% ( 1) 00:10:37.531 00:10:37.531 18:12:11 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:10:38.906 Initializing NVMe Controllers 00:10:38.906 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:38.906 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:38.906 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:38.906 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:38.906 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:38.906 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:38.906 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:38.906 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:38.906 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:38.906 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:38.906 Initialization complete. Launching workers. 00:10:38.906 ======================================================== 00:10:38.906 Latency(us) 00:10:38.906 Device Information : IOPS MiB/s Average min max 00:10:38.906 PCIE (0000:00:10.0) NSID 1 from core 0: 10262.59 120.26 12505.35 9728.92 44861.25 00:10:38.906 PCIE (0000:00:11.0) NSID 1 from core 0: 10262.59 120.26 12482.02 9759.64 42519.55 00:10:38.906 PCIE (0000:00:13.0) NSID 1 from core 0: 10262.59 120.26 12458.20 9951.99 41365.33 00:10:38.906 PCIE (0000:00:12.0) NSID 1 from core 0: 10262.59 120.26 12434.03 9807.80 39120.61 00:10:38.906 PCIE (0000:00:12.0) NSID 2 from core 0: 10262.59 120.26 12409.91 9935.41 37113.85 00:10:38.906 PCIE (0000:00:12.0) NSID 3 from core 0: 10326.33 121.01 12309.00 9872.97 27603.42 00:10:38.906 ======================================================== 00:10:38.906 Total : 61639.29 722.34 12432.96 9728.92 44861.25 00:10:38.906 00:10:38.906 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:38.906 ================================================================================= 00:10:38.906 1.00000% : 10128.291us 00:10:38.906 10.00000% : 10604.916us 00:10:38.906 25.00000% : 10962.385us 00:10:38.906 50.00000% : 11498.589us 00:10:38.906 75.00000% : 12451.840us 00:10:38.906 90.00000% : 14894.545us 00:10:38.906 95.00000% : 17039.360us 00:10:38.906 98.00000% : 23354.647us 00:10:38.906 99.00000% : 34793.658us 00:10:38.906 99.50000% : 43134.604us 00:10:38.906 99.90000% : 44564.480us 00:10:38.906 99.99000% : 45041.105us 00:10:38.906 99.99900% : 45041.105us 00:10:38.906 99.99990% : 45041.105us 00:10:38.906 99.99999% : 45041.105us 00:10:38.906 00:10:38.906 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:38.906 ================================================================================= 00:10:38.906 1.00000% : 10247.447us 
00:10:38.906 10.00000% : 10783.651us 00:10:38.906 25.00000% : 11021.964us 00:10:38.906 50.00000% : 11379.433us 00:10:38.906 75.00000% : 12332.684us 00:10:38.906 90.00000% : 14715.811us 00:10:38.906 95.00000% : 17039.360us 00:10:38.906 98.00000% : 23354.647us 00:10:38.906 99.00000% : 32887.156us 00:10:38.906 99.50000% : 40989.789us 00:10:38.906 99.90000% : 42419.665us 00:10:38.906 99.99000% : 42657.978us 00:10:38.906 99.99900% : 42657.978us 00:10:38.906 99.99990% : 42657.978us 00:10:38.906 99.99999% : 42657.978us 00:10:38.906 00:10:38.906 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:38.906 ================================================================================= 00:10:38.906 1.00000% : 10247.447us 00:10:38.906 10.00000% : 10724.073us 00:10:38.906 25.00000% : 11021.964us 00:10:38.906 50.00000% : 11439.011us 00:10:38.906 75.00000% : 12392.262us 00:10:38.906 90.00000% : 14715.811us 00:10:38.906 95.00000% : 17039.360us 00:10:38.906 98.00000% : 23592.960us 00:10:38.906 99.00000% : 31457.280us 00:10:38.906 99.50000% : 39798.225us 00:10:38.906 99.90000% : 41228.102us 00:10:38.906 99.99000% : 41466.415us 00:10:38.906 99.99900% : 41466.415us 00:10:38.906 99.99990% : 41466.415us 00:10:38.906 99.99999% : 41466.415us 00:10:38.906 00:10:38.906 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:38.906 ================================================================================= 00:10:38.906 1.00000% : 10187.869us 00:10:38.906 10.00000% : 10724.073us 00:10:38.906 25.00000% : 11021.964us 00:10:38.906 50.00000% : 11439.011us 00:10:38.906 75.00000% : 12392.262us 00:10:38.906 90.00000% : 14834.967us 00:10:38.906 95.00000% : 17039.360us 00:10:38.906 98.00000% : 23473.804us 00:10:38.906 99.00000% : 29193.309us 00:10:38.906 99.50000% : 37653.411us 00:10:38.906 99.90000% : 38844.975us 00:10:38.906 99.99000% : 39321.600us 00:10:38.906 99.99900% : 39321.600us 00:10:38.906 99.99990% : 39321.600us 00:10:38.906 99.99999% : 39321.600us 00:10:38.906 00:10:38.906 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:38.906 ================================================================================= 00:10:38.906 1.00000% : 10307.025us 00:10:38.906 10.00000% : 10724.073us 00:10:38.906 25.00000% : 11021.964us 00:10:38.906 50.00000% : 11439.011us 00:10:38.906 75.00000% : 12273.105us 00:10:38.906 90.00000% : 14834.967us 00:10:38.906 95.00000% : 17039.360us 00:10:38.906 98.00000% : 23473.804us 00:10:38.906 99.00000% : 27048.495us 00:10:38.906 99.50000% : 35508.596us 00:10:38.906 99.90000% : 36938.473us 00:10:38.906 99.99000% : 37176.785us 00:10:38.906 99.99900% : 37176.785us 00:10:38.906 99.99990% : 37176.785us 00:10:38.906 99.99999% : 37176.785us 00:10:38.906 00:10:38.906 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:38.906 ================================================================================= 00:10:38.906 1.00000% : 10247.447us 00:10:38.906 10.00000% : 10724.073us 00:10:38.906 25.00000% : 11021.964us 00:10:38.906 50.00000% : 11439.011us 00:10:38.906 75.00000% : 12392.262us 00:10:38.906 90.00000% : 14715.811us 00:10:38.906 95.00000% : 17158.516us 00:10:38.906 98.00000% : 22520.553us 00:10:38.906 99.00000% : 23712.116us 00:10:38.906 99.50000% : 25976.087us 00:10:38.906 99.90000% : 27405.964us 00:10:38.906 99.99000% : 27644.276us 00:10:38.906 99.99900% : 27644.276us 00:10:38.906 99.99990% : 27644.276us 00:10:38.906 99.99999% : 27644.276us 00:10:38.906 00:10:38.906 Latency histogram for PCIE (0000:00:10.0) NSID 1 
from core 0: 00:10:38.906 ============================================================================== 00:10:38.906 Range in us Cumulative IO count 00:10:38.906 9711.244 - 9770.822: 0.1844% ( 19) 00:10:38.906 9770.822 - 9830.400: 0.2135% ( 3) 00:10:38.906 9830.400 - 9889.978: 0.2717% ( 6) 00:10:38.906 9889.978 - 9949.556: 0.5047% ( 24) 00:10:38.906 9949.556 - 10009.135: 0.5920% ( 9) 00:10:38.906 10009.135 - 10068.713: 0.9220% ( 34) 00:10:38.906 10068.713 - 10128.291: 1.5431% ( 64) 00:10:38.906 10128.291 - 10187.869: 2.1157% ( 59) 00:10:38.906 10187.869 - 10247.447: 3.0959% ( 101) 00:10:38.906 10247.447 - 10307.025: 3.6491% ( 57) 00:10:38.906 10307.025 - 10366.604: 4.7457% ( 113) 00:10:38.906 10366.604 - 10426.182: 6.1530% ( 145) 00:10:38.906 10426.182 - 10485.760: 7.4049% ( 129) 00:10:38.906 10485.760 - 10545.338: 8.8703% ( 151) 00:10:38.906 10545.338 - 10604.916: 10.7240% ( 191) 00:10:38.906 10604.916 - 10664.495: 13.1793% ( 253) 00:10:38.906 10664.495 - 10724.073: 15.4697% ( 236) 00:10:38.906 10724.073 - 10783.651: 18.6432% ( 327) 00:10:38.906 10783.651 - 10843.229: 21.8168% ( 327) 00:10:38.906 10843.229 - 10902.807: 24.8932% ( 317) 00:10:38.907 10902.807 - 10962.385: 27.7562% ( 295) 00:10:38.907 10962.385 - 11021.964: 31.0850% ( 343) 00:10:38.907 11021.964 - 11081.542: 33.6665% ( 266) 00:10:38.907 11081.542 - 11141.120: 36.3451% ( 276) 00:10:38.907 11141.120 - 11200.698: 38.7519% ( 248) 00:10:38.907 11200.698 - 11260.276: 41.3141% ( 264) 00:10:38.907 11260.276 - 11319.855: 43.8373% ( 260) 00:10:38.907 11319.855 - 11379.433: 46.2539% ( 249) 00:10:38.907 11379.433 - 11439.011: 48.4569% ( 227) 00:10:38.907 11439.011 - 11498.589: 50.8055% ( 242) 00:10:38.907 11498.589 - 11558.167: 53.3385% ( 261) 00:10:38.907 11558.167 - 11617.745: 55.5027% ( 223) 00:10:38.907 11617.745 - 11677.324: 57.9872% ( 256) 00:10:38.907 11677.324 - 11736.902: 60.0543% ( 213) 00:10:38.907 11736.902 - 11796.480: 62.5000% ( 252) 00:10:38.907 11796.480 - 11856.058: 64.2566% ( 181) 00:10:38.907 11856.058 - 11915.636: 66.0520% ( 185) 00:10:38.907 11915.636 - 11975.215: 67.7213% ( 172) 00:10:38.907 11975.215 - 12034.793: 69.0994% ( 142) 00:10:38.907 12034.793 - 12094.371: 70.4095% ( 135) 00:10:38.907 12094.371 - 12153.949: 71.4286% ( 105) 00:10:38.907 12153.949 - 12213.527: 72.4573% ( 106) 00:10:38.907 12213.527 - 12273.105: 73.2531% ( 82) 00:10:38.907 12273.105 - 12332.684: 74.2527% ( 103) 00:10:38.907 12332.684 - 12392.262: 74.8641% ( 63) 00:10:38.907 12392.262 - 12451.840: 75.4658% ( 62) 00:10:38.907 12451.840 - 12511.418: 75.9996% ( 55) 00:10:38.907 12511.418 - 12570.996: 76.4266% ( 44) 00:10:38.907 12570.996 - 12630.575: 76.8925% ( 48) 00:10:38.907 12630.575 - 12690.153: 77.1642% ( 28) 00:10:38.907 12690.153 - 12749.731: 77.5621% ( 41) 00:10:38.907 12749.731 - 12809.309: 77.9115% ( 36) 00:10:38.907 12809.309 - 12868.887: 78.3967% ( 50) 00:10:38.907 12868.887 - 12928.465: 78.7267% ( 34) 00:10:38.907 12928.465 - 12988.044: 79.0179% ( 30) 00:10:38.907 12988.044 - 13047.622: 79.3284% ( 32) 00:10:38.907 13047.622 - 13107.200: 79.6584% ( 34) 00:10:38.907 13107.200 - 13166.778: 79.9301% ( 28) 00:10:38.907 13166.778 - 13226.356: 80.1922% ( 27) 00:10:38.907 13226.356 - 13285.935: 80.5221% ( 34) 00:10:38.907 13285.935 - 13345.513: 80.7356% ( 22) 00:10:38.907 13345.513 - 13405.091: 80.9783% ( 25) 00:10:38.907 13405.091 - 13464.669: 81.1724% ( 20) 00:10:38.907 13464.669 - 13524.247: 81.6382% ( 48) 00:10:38.907 13524.247 - 13583.825: 81.9391% ( 31) 00:10:38.907 13583.825 - 13643.404: 82.2884% ( 36) 00:10:38.907 13643.404 - 
13702.982: 82.9872% ( 72) 00:10:38.907 13702.982 - 13762.560: 83.2880% ( 31) 00:10:38.907 13762.560 - 13822.138: 83.5598% ( 28) 00:10:38.907 13822.138 - 13881.716: 83.9092% ( 36) 00:10:38.907 13881.716 - 13941.295: 84.3168% ( 42) 00:10:38.907 13941.295 - 14000.873: 84.7147% ( 41) 00:10:38.907 14000.873 - 14060.451: 85.1902% ( 49) 00:10:38.907 14060.451 - 14120.029: 85.5008% ( 32) 00:10:38.907 14120.029 - 14179.607: 85.9375% ( 45) 00:10:38.907 14179.607 - 14239.185: 86.2869% ( 36) 00:10:38.907 14239.185 - 14298.764: 86.7042% ( 43) 00:10:38.907 14298.764 - 14358.342: 87.0536% ( 36) 00:10:38.907 14358.342 - 14417.920: 87.4418% ( 40) 00:10:38.907 14417.920 - 14477.498: 87.8203% ( 39) 00:10:38.907 14477.498 - 14537.076: 88.1405% ( 33) 00:10:38.907 14537.076 - 14596.655: 88.5481% ( 42) 00:10:38.907 14596.655 - 14656.233: 88.8781% ( 34) 00:10:38.907 14656.233 - 14715.811: 89.3828% ( 52) 00:10:38.907 14715.811 - 14775.389: 89.6933% ( 32) 00:10:38.907 14775.389 - 14834.967: 89.9262% ( 24) 00:10:38.907 14834.967 - 14894.545: 90.2077% ( 29) 00:10:38.907 14894.545 - 14954.124: 90.4891% ( 29) 00:10:38.907 14954.124 - 15013.702: 90.7415% ( 26) 00:10:38.907 15013.702 - 15073.280: 91.0035% ( 27) 00:10:38.907 15073.280 - 15132.858: 91.1588% ( 16) 00:10:38.907 15132.858 - 15192.436: 91.4208% ( 27) 00:10:38.907 15192.436 - 15252.015: 91.7799% ( 37) 00:10:38.907 15252.015 - 15371.171: 92.1002% ( 33) 00:10:38.907 15371.171 - 15490.327: 92.6533% ( 57) 00:10:38.907 15490.327 - 15609.484: 93.0027% ( 36) 00:10:38.907 15609.484 - 15728.640: 93.3133% ( 32) 00:10:38.907 15728.640 - 15847.796: 93.4686% ( 16) 00:10:38.907 15847.796 - 15966.953: 93.6238% ( 16) 00:10:38.907 15966.953 - 16086.109: 93.7888% ( 17) 00:10:38.907 16086.109 - 16205.265: 93.9150% ( 13) 00:10:38.907 16205.265 - 16324.422: 94.2061% ( 30) 00:10:38.907 16324.422 - 16443.578: 94.4196% ( 22) 00:10:38.907 16443.578 - 16562.735: 94.5846% ( 17) 00:10:38.907 16562.735 - 16681.891: 94.7593% ( 18) 00:10:38.907 16681.891 - 16801.047: 94.8370% ( 8) 00:10:38.907 16801.047 - 16920.204: 94.9437% ( 11) 00:10:38.907 16920.204 - 17039.360: 95.1378% ( 20) 00:10:38.907 17039.360 - 17158.516: 95.2543% ( 12) 00:10:38.907 17158.516 - 17277.673: 95.3707% ( 12) 00:10:38.907 17277.673 - 17396.829: 95.4872% ( 12) 00:10:38.907 17396.829 - 17515.985: 95.5939% ( 11) 00:10:38.907 17515.985 - 17635.142: 95.6813% ( 9) 00:10:38.907 17635.142 - 17754.298: 95.7395% ( 6) 00:10:38.907 17754.298 - 17873.455: 95.7880% ( 5) 00:10:38.907 17873.455 - 17992.611: 95.8754% ( 9) 00:10:38.907 17992.611 - 18111.767: 95.9530% ( 8) 00:10:38.907 18111.767 - 18230.924: 96.0210% ( 7) 00:10:38.907 18230.924 - 18350.080: 96.0792% ( 6) 00:10:38.907 18350.080 - 18469.236: 96.1762% ( 10) 00:10:38.907 18469.236 - 18588.393: 96.2151% ( 4) 00:10:38.907 18588.393 - 18707.549: 96.2442% ( 3) 00:10:38.907 18707.549 - 18826.705: 96.2733% ( 3) 00:10:38.907 21567.302 - 21686.458: 96.3121% ( 4) 00:10:38.907 21686.458 - 21805.615: 96.4189% ( 11) 00:10:38.907 21805.615 - 21924.771: 96.4674% ( 5) 00:10:38.907 21924.771 - 22043.927: 96.4965% ( 3) 00:10:38.907 22043.927 - 22163.084: 96.7391% ( 25) 00:10:38.907 22163.084 - 22282.240: 96.8362% ( 10) 00:10:38.907 22282.240 - 22401.396: 96.9041% ( 7) 00:10:38.907 22401.396 - 22520.553: 97.1856% ( 29) 00:10:38.907 22520.553 - 22639.709: 97.3214% ( 14) 00:10:38.907 22639.709 - 22758.865: 97.4282% ( 11) 00:10:38.907 22758.865 - 22878.022: 97.5252% ( 10) 00:10:38.907 22878.022 - 22997.178: 97.6514% ( 13) 00:10:38.907 22997.178 - 23116.335: 97.7873% ( 14) 00:10:38.907 23116.335 - 
23235.491: 97.9037% ( 12) 00:10:38.907 23235.491 - 23354.647: 98.1658% ( 27) 00:10:38.907 23354.647 - 23473.804: 98.3016% ( 14) 00:10:38.907 23473.804 - 23592.960: 98.3987% ( 10) 00:10:38.907 23592.960 - 23712.116: 98.4569% ( 6) 00:10:38.907 23712.116 - 23831.273: 98.5540% ( 10) 00:10:38.907 23831.273 - 23950.429: 98.5637% ( 1) 00:10:38.907 23950.429 - 24069.585: 98.6122% ( 5) 00:10:38.907 24069.585 - 24188.742: 98.6607% ( 5) 00:10:38.907 24188.742 - 24307.898: 98.7092% ( 5) 00:10:38.907 24307.898 - 24427.055: 98.7578% ( 5) 00:10:38.907 33840.407 - 34078.720: 98.8354% ( 8) 00:10:38.907 34078.720 - 34317.033: 98.8936% ( 6) 00:10:38.907 34317.033 - 34555.345: 98.9616% ( 7) 00:10:38.907 34555.345 - 34793.658: 99.0295% ( 7) 00:10:38.907 34793.658 - 35031.971: 99.0974% ( 7) 00:10:38.907 35031.971 - 35270.284: 99.1557% ( 6) 00:10:38.907 35270.284 - 35508.596: 99.2236% ( 7) 00:10:38.907 35508.596 - 35746.909: 99.2818% ( 6) 00:10:38.907 35746.909 - 35985.222: 99.3595% ( 8) 00:10:38.907 35985.222 - 36223.535: 99.3789% ( 2) 00:10:38.907 42419.665 - 42657.978: 99.3983% ( 2) 00:10:38.907 42657.978 - 42896.291: 99.4565% ( 6) 00:10:38.907 42896.291 - 43134.604: 99.5245% ( 7) 00:10:38.907 43134.604 - 43372.916: 99.5924% ( 7) 00:10:38.907 43372.916 - 43611.229: 99.6506% ( 6) 00:10:38.907 43611.229 - 43849.542: 99.7089% ( 6) 00:10:38.907 43849.542 - 44087.855: 99.7865% ( 8) 00:10:38.907 44087.855 - 44326.167: 99.8544% ( 7) 00:10:38.907 44326.167 - 44564.480: 99.9127% ( 6) 00:10:38.907 44564.480 - 44802.793: 99.9806% ( 7) 00:10:38.907 44802.793 - 45041.105: 100.0000% ( 2) 00:10:38.907 00:10:38.907 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:38.907 ============================================================================== 00:10:38.907 Range in us Cumulative IO count 00:10:38.907 9711.244 - 9770.822: 0.0097% ( 1) 00:10:38.907 9889.978 - 9949.556: 0.0485% ( 4) 00:10:38.907 9949.556 - 10009.135: 0.1262% ( 8) 00:10:38.907 10009.135 - 10068.713: 0.3106% ( 19) 00:10:38.907 10068.713 - 10128.291: 0.5338% ( 23) 00:10:38.907 10128.291 - 10187.869: 0.8832% ( 36) 00:10:38.907 10187.869 - 10247.447: 1.2519% ( 38) 00:10:38.907 10247.447 - 10307.025: 1.6984% ( 46) 00:10:38.907 10307.025 - 10366.604: 2.3583% ( 68) 00:10:38.907 10366.604 - 10426.182: 3.0182% ( 68) 00:10:38.907 10426.182 - 10485.760: 4.0858% ( 110) 00:10:38.907 10485.760 - 10545.338: 5.1145% ( 106) 00:10:38.907 10545.338 - 10604.916: 6.3179% ( 124) 00:10:38.907 10604.916 - 10664.495: 8.0454% ( 178) 00:10:38.907 10664.495 - 10724.073: 9.9767% ( 199) 00:10:38.907 10724.073 - 10783.651: 12.1409% ( 223) 00:10:38.907 10783.651 - 10843.229: 14.9845% ( 293) 00:10:38.907 10843.229 - 10902.807: 17.8086% ( 291) 00:10:38.907 10902.807 - 10962.385: 21.1762% ( 347) 00:10:38.907 10962.385 - 11021.964: 25.2911% ( 424) 00:10:38.907 11021.964 - 11081.542: 29.3478% ( 418) 00:10:38.907 11081.542 - 11141.120: 33.6471% ( 443) 00:10:38.907 11141.120 - 11200.698: 37.8882% ( 437) 00:10:38.907 11200.698 - 11260.276: 42.4786% ( 473) 00:10:38.908 11260.276 - 11319.855: 46.4189% ( 406) 00:10:38.908 11319.855 - 11379.433: 50.1747% ( 387) 00:10:38.908 11379.433 - 11439.011: 53.8335% ( 377) 00:10:38.908 11439.011 - 11498.589: 56.9196% ( 318) 00:10:38.908 11498.589 - 11558.167: 59.5303% ( 269) 00:10:38.908 11558.167 - 11617.745: 61.7430% ( 228) 00:10:38.908 11617.745 - 11677.324: 63.4511% ( 176) 00:10:38.908 11677.324 - 11736.902: 65.1203% ( 172) 00:10:38.908 11736.902 - 11796.480: 66.6052% ( 153) 00:10:38.908 11796.480 - 11856.058: 68.1289% ( 157) 00:10:38.908 
11856.058 - 11915.636: 69.3905% ( 130) 00:10:38.908 11915.636 - 11975.215: 70.4872% ( 113) 00:10:38.908 11975.215 - 12034.793: 71.5062% ( 105) 00:10:38.908 12034.793 - 12094.371: 72.4379% ( 96) 00:10:38.908 12094.371 - 12153.949: 73.1755% ( 76) 00:10:38.908 12153.949 - 12213.527: 74.0101% ( 86) 00:10:38.908 12213.527 - 12273.105: 74.7865% ( 80) 00:10:38.908 12273.105 - 12332.684: 75.4561% ( 69) 00:10:38.908 12332.684 - 12392.262: 75.9608% ( 52) 00:10:38.908 12392.262 - 12451.840: 76.5043% ( 56) 00:10:38.908 12451.840 - 12511.418: 76.8439% ( 35) 00:10:38.908 12511.418 - 12570.996: 77.1060% ( 27) 00:10:38.908 12570.996 - 12630.575: 77.3777% ( 28) 00:10:38.908 12630.575 - 12690.153: 77.6203% ( 25) 00:10:38.908 12690.153 - 12749.731: 77.8824% ( 27) 00:10:38.908 12749.731 - 12809.309: 78.2706% ( 40) 00:10:38.908 12809.309 - 12868.887: 78.5132% ( 25) 00:10:38.908 12868.887 - 12928.465: 78.8043% ( 30) 00:10:38.908 12928.465 - 12988.044: 79.1343% ( 34) 00:10:38.908 12988.044 - 13047.622: 79.3381% ( 21) 00:10:38.908 13047.622 - 13107.200: 79.5322% ( 20) 00:10:38.908 13107.200 - 13166.778: 79.6972% ( 17) 00:10:38.908 13166.778 - 13226.356: 79.9010% ( 21) 00:10:38.908 13226.356 - 13285.935: 80.0757% ( 18) 00:10:38.908 13285.935 - 13345.513: 80.3474% ( 28) 00:10:38.908 13345.513 - 13405.091: 80.6871% ( 35) 00:10:38.908 13405.091 - 13464.669: 80.9783% ( 30) 00:10:38.908 13464.669 - 13524.247: 81.2209% ( 25) 00:10:38.908 13524.247 - 13583.825: 81.5120% ( 30) 00:10:38.908 13583.825 - 13643.404: 81.9488% ( 45) 00:10:38.908 13643.404 - 13702.982: 82.3952% ( 46) 00:10:38.908 13702.982 - 13762.560: 82.7155% ( 33) 00:10:38.908 13762.560 - 13822.138: 83.0939% ( 39) 00:10:38.908 13822.138 - 13881.716: 83.5501% ( 47) 00:10:38.908 13881.716 - 13941.295: 83.9868% ( 45) 00:10:38.908 13941.295 - 14000.873: 84.4915% ( 52) 00:10:38.908 14000.873 - 14060.451: 85.1029% ( 63) 00:10:38.908 14060.451 - 14120.029: 85.5008% ( 41) 00:10:38.908 14120.029 - 14179.607: 85.9763% ( 49) 00:10:38.908 14179.607 - 14239.185: 86.4227% ( 46) 00:10:38.908 14239.185 - 14298.764: 86.8401% ( 43) 00:10:38.908 14298.764 - 14358.342: 87.2283% ( 40) 00:10:38.908 14358.342 - 14417.920: 87.6844% ( 47) 00:10:38.908 14417.920 - 14477.498: 88.1891% ( 52) 00:10:38.908 14477.498 - 14537.076: 88.6937% ( 52) 00:10:38.908 14537.076 - 14596.655: 89.1984% ( 52) 00:10:38.908 14596.655 - 14656.233: 89.6254% ( 44) 00:10:38.908 14656.233 - 14715.811: 90.0815% ( 47) 00:10:38.908 14715.811 - 14775.389: 90.4309% ( 36) 00:10:38.908 14775.389 - 14834.967: 90.8579% ( 44) 00:10:38.908 14834.967 - 14894.545: 91.1005% ( 25) 00:10:38.908 14894.545 - 14954.124: 91.3820% ( 29) 00:10:38.908 14954.124 - 15013.702: 91.5664% ( 19) 00:10:38.908 15013.702 - 15073.280: 91.7508% ( 19) 00:10:38.908 15073.280 - 15132.858: 91.8672% ( 12) 00:10:38.908 15132.858 - 15192.436: 92.0031% ( 14) 00:10:38.908 15192.436 - 15252.015: 92.1099% ( 11) 00:10:38.908 15252.015 - 15371.171: 92.3234% ( 22) 00:10:38.908 15371.171 - 15490.327: 92.5466% ( 23) 00:10:38.908 15490.327 - 15609.484: 92.7407% ( 20) 00:10:38.908 15609.484 - 15728.640: 92.9445% ( 21) 00:10:38.908 15728.640 - 15847.796: 93.2356% ( 30) 00:10:38.908 15847.796 - 15966.953: 93.5559% ( 33) 00:10:38.908 15966.953 - 16086.109: 93.7985% ( 25) 00:10:38.908 16086.109 - 16205.265: 93.9732% ( 18) 00:10:38.908 16205.265 - 16324.422: 94.1382% ( 17) 00:10:38.908 16324.422 - 16443.578: 94.3517% ( 22) 00:10:38.908 16443.578 - 16562.735: 94.5555% ( 21) 00:10:38.908 16562.735 - 16681.891: 94.8175% ( 27) 00:10:38.908 16681.891 - 16801.047: 94.8952% 
( 8) 00:10:38.908 16801.047 - 16920.204: 94.9534% ( 6) 00:10:38.908 16920.204 - 17039.360: 95.0116% ( 6) 00:10:38.908 17039.360 - 17158.516: 95.0505% ( 4) 00:10:38.908 17158.516 - 17277.673: 95.1766% ( 13) 00:10:38.908 17277.673 - 17396.829: 95.3222% ( 15) 00:10:38.908 17396.829 - 17515.985: 95.4581% ( 14) 00:10:38.908 17515.985 - 17635.142: 95.6522% ( 20) 00:10:38.908 17635.142 - 17754.298: 95.8269% ( 18) 00:10:38.908 17754.298 - 17873.455: 95.9821% ( 16) 00:10:38.908 17873.455 - 17992.611: 96.0986% ( 12) 00:10:38.908 17992.611 - 18111.767: 96.1568% ( 6) 00:10:38.908 18111.767 - 18230.924: 96.2345% ( 8) 00:10:38.908 18230.924 - 18350.080: 96.2733% ( 4) 00:10:38.908 21924.771 - 22043.927: 96.3703% ( 10) 00:10:38.908 22043.927 - 22163.084: 96.4383% ( 7) 00:10:38.908 22163.084 - 22282.240: 96.5353% ( 10) 00:10:38.908 22282.240 - 22401.396: 96.7197% ( 19) 00:10:38.908 22401.396 - 22520.553: 96.8653% ( 15) 00:10:38.908 22520.553 - 22639.709: 96.9818% ( 12) 00:10:38.908 22639.709 - 22758.865: 97.1564% ( 18) 00:10:38.908 22758.865 - 22878.022: 97.3797% ( 23) 00:10:38.908 22878.022 - 22997.178: 97.6126% ( 24) 00:10:38.908 22997.178 - 23116.335: 97.8164% ( 21) 00:10:38.908 23116.335 - 23235.491: 97.9717% ( 16) 00:10:38.908 23235.491 - 23354.647: 98.0881% ( 12) 00:10:38.908 23354.647 - 23473.804: 98.2531% ( 17) 00:10:38.908 23473.804 - 23592.960: 98.3307% ( 8) 00:10:38.908 23592.960 - 23712.116: 98.4278% ( 10) 00:10:38.908 23712.116 - 23831.273: 98.5248% ( 10) 00:10:38.908 23831.273 - 23950.429: 98.5734% ( 5) 00:10:38.908 23950.429 - 24069.585: 98.6219% ( 5) 00:10:38.908 24069.585 - 24188.742: 98.6704% ( 5) 00:10:38.908 24188.742 - 24307.898: 98.7189% ( 5) 00:10:38.908 24307.898 - 24427.055: 98.7578% ( 4) 00:10:38.908 31695.593 - 31933.905: 98.7675% ( 1) 00:10:38.908 31933.905 - 32172.218: 98.8354% ( 7) 00:10:38.908 32172.218 - 32410.531: 98.9033% ( 7) 00:10:38.908 32410.531 - 32648.844: 98.9713% ( 7) 00:10:38.908 32648.844 - 32887.156: 99.0489% ( 8) 00:10:38.908 32887.156 - 33125.469: 99.1266% ( 8) 00:10:38.908 33125.469 - 33363.782: 99.1945% ( 7) 00:10:38.908 33363.782 - 33602.095: 99.2624% ( 7) 00:10:38.908 33602.095 - 33840.407: 99.3401% ( 8) 00:10:38.908 33840.407 - 34078.720: 99.3789% ( 4) 00:10:38.908 40274.851 - 40513.164: 99.3886% ( 1) 00:10:38.908 40513.164 - 40751.476: 99.4565% ( 7) 00:10:38.908 40751.476 - 40989.789: 99.5342% ( 8) 00:10:38.908 40989.789 - 41228.102: 99.6021% ( 7) 00:10:38.908 41228.102 - 41466.415: 99.6797% ( 8) 00:10:38.908 41466.415 - 41704.727: 99.7477% ( 7) 00:10:38.908 41704.727 - 41943.040: 99.8156% ( 7) 00:10:38.908 41943.040 - 42181.353: 99.8932% ( 8) 00:10:38.908 42181.353 - 42419.665: 99.9612% ( 7) 00:10:38.908 42419.665 - 42657.978: 100.0000% ( 4) 00:10:38.908 00:10:38.908 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:38.908 ============================================================================== 00:10:38.908 Range in us Cumulative IO count 00:10:38.908 9949.556 - 10009.135: 0.0582% ( 6) 00:10:38.908 10009.135 - 10068.713: 0.1553% ( 10) 00:10:38.908 10068.713 - 10128.291: 0.3494% ( 20) 00:10:38.908 10128.291 - 10187.869: 0.6017% ( 26) 00:10:38.908 10187.869 - 10247.447: 1.1743% ( 59) 00:10:38.908 10247.447 - 10307.025: 1.8439% ( 69) 00:10:38.908 10307.025 - 10366.604: 2.6980% ( 88) 00:10:38.908 10366.604 - 10426.182: 3.6879% ( 102) 00:10:38.908 10426.182 - 10485.760: 4.8234% ( 117) 00:10:38.908 10485.760 - 10545.338: 6.2597% ( 148) 00:10:38.908 10545.338 - 10604.916: 7.8998% ( 169) 00:10:38.908 10604.916 - 10664.495: 9.8117% ( 197) 
00:10:38.908 10664.495 - 10724.073: 11.8692% ( 212) 00:10:38.908 10724.073 - 10783.651: 14.2372% ( 244) 00:10:38.908 10783.651 - 10843.229: 17.2263% ( 308) 00:10:38.908 10843.229 - 10902.807: 20.5745% ( 345) 00:10:38.908 10902.807 - 10962.385: 23.8839% ( 341) 00:10:38.908 10962.385 - 11021.964: 26.9798% ( 319) 00:10:38.908 11021.964 - 11081.542: 30.4445% ( 357) 00:10:38.908 11081.542 - 11141.120: 34.1421% ( 381) 00:10:38.908 11141.120 - 11200.698: 37.7911% ( 376) 00:10:38.908 11200.698 - 11260.276: 41.3432% ( 366) 00:10:38.908 11260.276 - 11319.855: 45.2737% ( 405) 00:10:38.908 11319.855 - 11379.433: 49.1460% ( 399) 00:10:38.908 11379.433 - 11439.011: 52.6786% ( 364) 00:10:38.908 11439.011 - 11498.589: 55.7162% ( 313) 00:10:38.908 11498.589 - 11558.167: 58.6277% ( 300) 00:10:38.908 11558.167 - 11617.745: 61.0637% ( 251) 00:10:38.908 11617.745 - 11677.324: 63.4317% ( 244) 00:10:38.908 11677.324 - 11736.902: 65.4794% ( 211) 00:10:38.908 11736.902 - 11796.480: 66.9061% ( 147) 00:10:38.908 11796.480 - 11856.058: 68.2259% ( 136) 00:10:38.908 11856.058 - 11915.636: 69.3129% ( 112) 00:10:38.908 11915.636 - 11975.215: 70.3319% ( 105) 00:10:38.908 11975.215 - 12034.793: 71.3121% ( 101) 00:10:38.908 12034.793 - 12094.371: 72.1856% ( 90) 00:10:38.908 12094.371 - 12153.949: 72.8940% ( 73) 00:10:38.908 12153.949 - 12213.527: 73.5734% ( 70) 00:10:38.908 12213.527 - 12273.105: 74.2818% ( 73) 00:10:38.908 12273.105 - 12332.684: 74.9903% ( 73) 00:10:38.908 12332.684 - 12392.262: 75.5629% ( 59) 00:10:38.908 12392.262 - 12451.840: 76.0773% ( 53) 00:10:38.908 12451.840 - 12511.418: 76.4557% ( 39) 00:10:38.908 12511.418 - 12570.996: 76.7954% ( 35) 00:10:38.909 12570.996 - 12630.575: 77.1060% ( 32) 00:10:38.909 12630.575 - 12690.153: 77.4845% ( 39) 00:10:38.909 12690.153 - 12749.731: 77.8047% ( 33) 00:10:38.909 12749.731 - 12809.309: 78.0571% ( 26) 00:10:38.909 12809.309 - 12868.887: 78.2706% ( 22) 00:10:38.909 12868.887 - 12928.465: 78.5132% ( 25) 00:10:38.909 12928.465 - 12988.044: 78.7461% ( 24) 00:10:38.909 12988.044 - 13047.622: 78.9887% ( 25) 00:10:38.909 13047.622 - 13107.200: 79.2702% ( 29) 00:10:38.909 13107.200 - 13166.778: 79.4643% ( 20) 00:10:38.909 13166.778 - 13226.356: 79.7069% ( 25) 00:10:38.909 13226.356 - 13285.935: 80.0757% ( 38) 00:10:38.909 13285.935 - 13345.513: 80.4930% ( 43) 00:10:38.909 13345.513 - 13405.091: 80.8521% ( 37) 00:10:38.909 13405.091 - 13464.669: 81.4150% ( 58) 00:10:38.909 13464.669 - 13524.247: 81.9682% ( 57) 00:10:38.909 13524.247 - 13583.825: 82.3273% ( 37) 00:10:38.909 13583.825 - 13643.404: 82.7737% ( 46) 00:10:38.909 13643.404 - 13702.982: 83.0842% ( 32) 00:10:38.909 13702.982 - 13762.560: 83.4530% ( 38) 00:10:38.909 13762.560 - 13822.138: 83.8121% ( 37) 00:10:38.909 13822.138 - 13881.716: 84.2003% ( 40) 00:10:38.909 13881.716 - 13941.295: 84.7729% ( 59) 00:10:38.909 13941.295 - 14000.873: 85.1029% ( 34) 00:10:38.909 14000.873 - 14060.451: 85.5396% ( 45) 00:10:38.909 14060.451 - 14120.029: 85.9666% ( 44) 00:10:38.909 14120.029 - 14179.607: 86.2966% ( 34) 00:10:38.909 14179.607 - 14239.185: 86.6751% ( 39) 00:10:38.909 14239.185 - 14298.764: 87.0827% ( 42) 00:10:38.909 14298.764 - 14358.342: 87.5582% ( 49) 00:10:38.909 14358.342 - 14417.920: 88.0435% ( 50) 00:10:38.909 14417.920 - 14477.498: 88.4414% ( 41) 00:10:38.909 14477.498 - 14537.076: 88.8490% ( 42) 00:10:38.909 14537.076 - 14596.655: 89.2857% ( 45) 00:10:38.909 14596.655 - 14656.233: 89.7904% ( 52) 00:10:38.909 14656.233 - 14715.811: 90.2174% ( 44) 00:10:38.909 14715.811 - 14775.389: 90.5182% ( 31) 00:10:38.909 
14775.389 - 14834.967: 90.7609% ( 25) 00:10:38.909 14834.967 - 14894.545: 91.0714% ( 32) 00:10:38.909 14894.545 - 14954.124: 91.3626% ( 30) 00:10:38.909 14954.124 - 15013.702: 91.5955% ( 24) 00:10:38.909 15013.702 - 15073.280: 91.7799% ( 19) 00:10:38.909 15073.280 - 15132.858: 91.9546% ( 18) 00:10:38.909 15132.858 - 15192.436: 92.0807% ( 13) 00:10:38.909 15192.436 - 15252.015: 92.2069% ( 13) 00:10:38.909 15252.015 - 15371.171: 92.3234% ( 12) 00:10:38.909 15371.171 - 15490.327: 92.4107% ( 9) 00:10:38.909 15490.327 - 15609.484: 92.5951% ( 19) 00:10:38.909 15609.484 - 15728.640: 92.8668% ( 28) 00:10:38.909 15728.640 - 15847.796: 93.1289% ( 27) 00:10:38.909 15847.796 - 15966.953: 93.4297% ( 31) 00:10:38.909 15966.953 - 16086.109: 93.6335% ( 21) 00:10:38.909 16086.109 - 16205.265: 93.8082% ( 18) 00:10:38.909 16205.265 - 16324.422: 94.0314% ( 23) 00:10:38.909 16324.422 - 16443.578: 94.2255% ( 20) 00:10:38.909 16443.578 - 16562.735: 94.3905% ( 17) 00:10:38.909 16562.735 - 16681.891: 94.5749% ( 19) 00:10:38.909 16681.891 - 16801.047: 94.7981% ( 23) 00:10:38.909 16801.047 - 16920.204: 94.9728% ( 18) 00:10:38.909 16920.204 - 17039.360: 95.0796% ( 11) 00:10:38.909 17039.360 - 17158.516: 95.2931% ( 22) 00:10:38.909 17158.516 - 17277.673: 95.4581% ( 17) 00:10:38.909 17277.673 - 17396.829: 95.5939% ( 14) 00:10:38.909 17396.829 - 17515.985: 95.6910% ( 10) 00:10:38.909 17515.985 - 17635.142: 95.8172% ( 13) 00:10:38.909 17635.142 - 17754.298: 95.9336% ( 12) 00:10:38.909 17754.298 - 17873.455: 96.0404% ( 11) 00:10:38.909 17873.455 - 17992.611: 96.1277% ( 9) 00:10:38.909 17992.611 - 18111.767: 96.1665% ( 4) 00:10:38.909 18111.767 - 18230.924: 96.1957% ( 3) 00:10:38.909 18230.924 - 18350.080: 96.2345% ( 4) 00:10:38.909 18350.080 - 18469.236: 96.2636% ( 3) 00:10:38.909 18469.236 - 18588.393: 96.2733% ( 1) 00:10:38.909 21686.458 - 21805.615: 96.2927% ( 2) 00:10:38.909 21805.615 - 21924.771: 96.3800% ( 9) 00:10:38.909 21924.771 - 22043.927: 96.4771% ( 10) 00:10:38.909 22043.927 - 22163.084: 96.5644% ( 9) 00:10:38.909 22163.084 - 22282.240: 96.7391% ( 18) 00:10:38.909 22282.240 - 22401.396: 96.7974% ( 6) 00:10:38.909 22401.396 - 22520.553: 96.8362% ( 4) 00:10:38.909 22520.553 - 22639.709: 96.8847% ( 5) 00:10:38.909 22639.709 - 22758.865: 96.9720% ( 9) 00:10:38.909 22758.865 - 22878.022: 97.0594% ( 9) 00:10:38.909 22878.022 - 22997.178: 97.1661% ( 11) 00:10:38.909 22997.178 - 23116.335: 97.4185% ( 26) 00:10:38.909 23116.335 - 23235.491: 97.5835% ( 17) 00:10:38.909 23235.491 - 23354.647: 97.7193% ( 14) 00:10:38.909 23354.647 - 23473.804: 97.8940% ( 18) 00:10:38.909 23473.804 - 23592.960: 98.1852% ( 30) 00:10:38.909 23592.960 - 23712.116: 98.3793% ( 20) 00:10:38.909 23712.116 - 23831.273: 98.4666% ( 9) 00:10:38.909 23831.273 - 23950.429: 98.5443% ( 8) 00:10:38.909 23950.429 - 24069.585: 98.6219% ( 8) 00:10:38.909 24069.585 - 24188.742: 98.6704% ( 5) 00:10:38.909 24188.742 - 24307.898: 98.7092% ( 4) 00:10:38.909 24307.898 - 24427.055: 98.7481% ( 4) 00:10:38.909 24427.055 - 24546.211: 98.7578% ( 1) 00:10:38.909 30384.873 - 30504.029: 98.7675% ( 1) 00:10:38.909 30504.029 - 30742.342: 98.8257% ( 6) 00:10:38.909 30742.342 - 30980.655: 98.8936% ( 7) 00:10:38.909 30980.655 - 31218.967: 98.9713% ( 8) 00:10:38.909 31218.967 - 31457.280: 99.0392% ( 7) 00:10:38.909 31457.280 - 31695.593: 99.1071% ( 7) 00:10:38.909 31695.593 - 31933.905: 99.1848% ( 8) 00:10:38.909 31933.905 - 32172.218: 99.2527% ( 7) 00:10:38.909 32172.218 - 32410.531: 99.3207% ( 7) 00:10:38.909 32410.531 - 32648.844: 99.3789% ( 6) 00:10:38.909 39083.287 - 
39321.600: 99.3983% ( 2) 00:10:38.909 39321.600 - 39559.913: 99.4662% ( 7) 00:10:38.909 39559.913 - 39798.225: 99.5342% ( 7) 00:10:38.909 39798.225 - 40036.538: 99.6021% ( 7) 00:10:38.909 40036.538 - 40274.851: 99.6700% ( 7) 00:10:38.909 40274.851 - 40513.164: 99.7380% ( 7) 00:10:38.909 40513.164 - 40751.476: 99.8059% ( 7) 00:10:38.909 40751.476 - 40989.789: 99.8835% ( 8) 00:10:38.909 40989.789 - 41228.102: 99.9515% ( 7) 00:10:38.909 41228.102 - 41466.415: 100.0000% ( 5) 00:10:38.909 00:10:38.909 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:38.909 ============================================================================== 00:10:38.909 Range in us Cumulative IO count 00:10:38.909 9770.822 - 9830.400: 0.0097% ( 1) 00:10:38.909 9830.400 - 9889.978: 0.0291% ( 2) 00:10:38.909 9889.978 - 9949.556: 0.0679% ( 4) 00:10:38.909 9949.556 - 10009.135: 0.1456% ( 8) 00:10:38.909 10009.135 - 10068.713: 0.3397% ( 20) 00:10:38.909 10068.713 - 10128.291: 0.6211% ( 29) 00:10:38.909 10128.291 - 10187.869: 1.0870% ( 48) 00:10:38.909 10187.869 - 10247.447: 1.6013% ( 53) 00:10:38.909 10247.447 - 10307.025: 2.2030% ( 62) 00:10:38.909 10307.025 - 10366.604: 2.9988% ( 82) 00:10:38.909 10366.604 - 10426.182: 3.9305% ( 96) 00:10:38.909 10426.182 - 10485.760: 4.9010% ( 100) 00:10:38.909 10485.760 - 10545.338: 6.1141% ( 125) 00:10:38.909 10545.338 - 10604.916: 7.7446% ( 168) 00:10:38.909 10604.916 - 10664.495: 9.2877% ( 159) 00:10:38.909 10664.495 - 10724.073: 11.4810% ( 226) 00:10:38.909 10724.073 - 10783.651: 13.5578% ( 214) 00:10:38.909 10783.651 - 10843.229: 16.2073% ( 273) 00:10:38.909 10843.229 - 10902.807: 19.2935% ( 318) 00:10:38.909 10902.807 - 10962.385: 22.1564% ( 295) 00:10:38.909 10962.385 - 11021.964: 25.5726% ( 352) 00:10:38.909 11021.964 - 11081.542: 29.2120% ( 375) 00:10:38.909 11081.542 - 11141.120: 33.1036% ( 401) 00:10:38.909 11141.120 - 11200.698: 36.9468% ( 396) 00:10:38.909 11200.698 - 11260.276: 41.3432% ( 453) 00:10:38.909 11260.276 - 11319.855: 45.7492% ( 454) 00:10:38.909 11319.855 - 11379.433: 49.3983% ( 376) 00:10:38.909 11379.433 - 11439.011: 52.8921% ( 360) 00:10:38.909 11439.011 - 11498.589: 56.2791% ( 349) 00:10:38.909 11498.589 - 11558.167: 58.9189% ( 272) 00:10:38.909 11558.167 - 11617.745: 61.2286% ( 238) 00:10:38.909 11617.745 - 11677.324: 63.2085% ( 204) 00:10:38.909 11677.324 - 11736.902: 65.1495% ( 200) 00:10:38.909 11736.902 - 11796.480: 66.5470% ( 144) 00:10:38.909 11796.480 - 11856.058: 67.7116% ( 120) 00:10:38.909 11856.058 - 11915.636: 68.6044% ( 92) 00:10:38.909 11915.636 - 11975.215: 69.3323% ( 75) 00:10:38.909 11975.215 - 12034.793: 70.0699% ( 76) 00:10:38.909 12034.793 - 12094.371: 70.7492% ( 70) 00:10:38.909 12094.371 - 12153.949: 71.3800% ( 65) 00:10:38.909 12153.949 - 12213.527: 72.2050% ( 85) 00:10:38.909 12213.527 - 12273.105: 73.3793% ( 121) 00:10:38.909 12273.105 - 12332.684: 74.2721% ( 92) 00:10:38.909 12332.684 - 12392.262: 75.1068% ( 86) 00:10:38.909 12392.262 - 12451.840: 75.6793% ( 59) 00:10:38.909 12451.840 - 12511.418: 76.3587% ( 70) 00:10:38.909 12511.418 - 12570.996: 76.9119% ( 57) 00:10:38.909 12570.996 - 12630.575: 77.4262% ( 53) 00:10:38.909 12630.575 - 12690.153: 77.9697% ( 56) 00:10:38.909 12690.153 - 12749.731: 78.4550% ( 50) 00:10:38.909 12749.731 - 12809.309: 78.8917% ( 45) 00:10:38.909 12809.309 - 12868.887: 79.1440% ( 26) 00:10:38.909 12868.887 - 12928.465: 79.3381% ( 20) 00:10:38.909 12928.465 - 12988.044: 79.5031% ( 17) 00:10:38.909 12988.044 - 13047.622: 79.6972% ( 20) 00:10:38.909 13047.622 - 13107.200: 79.9495% ( 
26) 00:10:38.909 13107.200 - 13166.778: 80.2116% ( 27) 00:10:38.909 13166.778 - 13226.356: 80.5124% ( 31) 00:10:38.909 13226.356 - 13285.935: 80.7745% ( 27) 00:10:38.909 13285.935 - 13345.513: 81.1821% ( 42) 00:10:38.909 13345.513 - 13405.091: 81.5411% ( 37) 00:10:38.909 13405.091 - 13464.669: 81.7547% ( 22) 00:10:38.910 13464.669 - 13524.247: 81.9196% ( 17) 00:10:38.910 13524.247 - 13583.825: 82.2108% ( 30) 00:10:38.910 13583.825 - 13643.404: 82.5505% ( 35) 00:10:38.910 13643.404 - 13702.982: 82.9387% ( 40) 00:10:38.910 13702.982 - 13762.560: 83.2686% ( 34) 00:10:38.910 13762.560 - 13822.138: 83.8121% ( 56) 00:10:38.910 13822.138 - 13881.716: 84.3071% ( 51) 00:10:38.910 13881.716 - 13941.295: 84.6759% ( 38) 00:10:38.910 13941.295 - 14000.873: 85.1611% ( 50) 00:10:38.910 14000.873 - 14060.451: 85.5978% ( 45) 00:10:38.910 14060.451 - 14120.029: 86.0151% ( 43) 00:10:38.910 14120.029 - 14179.607: 86.3645% ( 36) 00:10:38.910 14179.607 - 14239.185: 86.7333% ( 38) 00:10:38.910 14239.185 - 14298.764: 87.1700% ( 45) 00:10:38.910 14298.764 - 14358.342: 87.6262% ( 47) 00:10:38.910 14358.342 - 14417.920: 87.9755% ( 36) 00:10:38.910 14417.920 - 14477.498: 88.3249% ( 36) 00:10:38.910 14477.498 - 14537.076: 88.7131% ( 40) 00:10:38.910 14537.076 - 14596.655: 88.9460% ( 24) 00:10:38.910 14596.655 - 14656.233: 89.2760% ( 34) 00:10:38.910 14656.233 - 14715.811: 89.5575% ( 29) 00:10:38.910 14715.811 - 14775.389: 89.8195% ( 27) 00:10:38.910 14775.389 - 14834.967: 90.1495% ( 34) 00:10:38.910 14834.967 - 14894.545: 90.4891% ( 35) 00:10:38.910 14894.545 - 14954.124: 90.8482% ( 37) 00:10:38.910 14954.124 - 15013.702: 91.1588% ( 32) 00:10:38.910 15013.702 - 15073.280: 91.4499% ( 30) 00:10:38.910 15073.280 - 15132.858: 91.6634% ( 22) 00:10:38.910 15132.858 - 15192.436: 91.8187% ( 16) 00:10:38.910 15192.436 - 15252.015: 91.9643% ( 15) 00:10:38.910 15252.015 - 15371.171: 92.1099% ( 15) 00:10:38.910 15371.171 - 15490.327: 92.3040% ( 20) 00:10:38.910 15490.327 - 15609.484: 92.5078% ( 21) 00:10:38.910 15609.484 - 15728.640: 92.8086% ( 31) 00:10:38.910 15728.640 - 15847.796: 93.0998% ( 30) 00:10:38.910 15847.796 - 15966.953: 93.3036% ( 21) 00:10:38.910 15966.953 - 16086.109: 93.4491% ( 15) 00:10:38.910 16086.109 - 16205.265: 93.6238% ( 18) 00:10:38.910 16205.265 - 16324.422: 93.8568% ( 24) 00:10:38.910 16324.422 - 16443.578: 94.1285% ( 28) 00:10:38.910 16443.578 - 16562.735: 94.3614% ( 24) 00:10:38.910 16562.735 - 16681.891: 94.5264% ( 17) 00:10:38.910 16681.891 - 16801.047: 94.7690% ( 25) 00:10:38.910 16801.047 - 16920.204: 94.9340% ( 17) 00:10:38.910 16920.204 - 17039.360: 95.1184% ( 19) 00:10:38.910 17039.360 - 17158.516: 95.3901% ( 28) 00:10:38.910 17158.516 - 17277.673: 95.5551% ( 17) 00:10:38.910 17277.673 - 17396.829: 95.6619% ( 11) 00:10:38.910 17396.829 - 17515.985: 95.7783% ( 12) 00:10:38.910 17515.985 - 17635.142: 95.9045% ( 13) 00:10:38.910 17635.142 - 17754.298: 95.9821% ( 8) 00:10:38.910 17754.298 - 17873.455: 96.0792% ( 10) 00:10:38.910 17873.455 - 17992.611: 96.1374% ( 6) 00:10:38.910 17992.611 - 18111.767: 96.1665% ( 3) 00:10:38.910 18111.767 - 18230.924: 96.2054% ( 4) 00:10:38.910 18230.924 - 18350.080: 96.2442% ( 4) 00:10:38.910 18350.080 - 18469.236: 96.2733% ( 3) 00:10:38.910 21924.771 - 22043.927: 96.3995% ( 13) 00:10:38.910 22043.927 - 22163.084: 96.6033% ( 21) 00:10:38.910 22163.084 - 22282.240: 96.8750% ( 28) 00:10:38.910 22282.240 - 22401.396: 97.2147% ( 35) 00:10:38.910 22401.396 - 22520.553: 97.3408% ( 13) 00:10:38.910 22520.553 - 22639.709: 97.4379% ( 10) 00:10:38.910 22639.709 - 22758.865: 
97.5738% ( 14) 00:10:38.910 22758.865 - 22878.022: 97.6417% ( 7) 00:10:38.910 22878.022 - 22997.178: 97.7096% ( 7) 00:10:38.910 22997.178 - 23116.335: 97.7679% ( 6) 00:10:38.910 23116.335 - 23235.491: 97.8358% ( 7) 00:10:38.910 23235.491 - 23354.647: 97.9814% ( 15) 00:10:38.910 23354.647 - 23473.804: 98.1075% ( 13) 00:10:38.910 23473.804 - 23592.960: 98.2434% ( 14) 00:10:38.910 23592.960 - 23712.116: 98.4569% ( 22) 00:10:38.910 23712.116 - 23831.273: 98.5831% ( 13) 00:10:38.910 23831.273 - 23950.429: 98.6801% ( 10) 00:10:38.910 23950.429 - 24069.585: 98.7286% ( 5) 00:10:38.910 24069.585 - 24188.742: 98.7578% ( 3) 00:10:38.910 28240.058 - 28359.215: 98.7675% ( 1) 00:10:38.910 28359.215 - 28478.371: 98.8063% ( 4) 00:10:38.910 28478.371 - 28597.527: 98.8451% ( 4) 00:10:38.910 28597.527 - 28716.684: 98.8742% ( 3) 00:10:38.910 28716.684 - 28835.840: 98.9130% ( 4) 00:10:38.910 28835.840 - 28954.996: 98.9422% ( 3) 00:10:38.910 28954.996 - 29074.153: 98.9810% ( 4) 00:10:38.910 29074.153 - 29193.309: 99.0198% ( 4) 00:10:38.910 29193.309 - 29312.465: 99.0586% ( 4) 00:10:38.910 29312.465 - 29431.622: 99.0877% ( 3) 00:10:38.910 29431.622 - 29550.778: 99.1266% ( 4) 00:10:38.910 29550.778 - 29669.935: 99.1557% ( 3) 00:10:38.910 29669.935 - 29789.091: 99.1945% ( 4) 00:10:38.910 29789.091 - 29908.247: 99.2333% ( 4) 00:10:38.910 29908.247 - 30027.404: 99.2721% ( 4) 00:10:38.910 30027.404 - 30146.560: 99.3012% ( 3) 00:10:38.910 30146.560 - 30265.716: 99.3304% ( 3) 00:10:38.910 30265.716 - 30384.873: 99.3692% ( 4) 00:10:38.910 30384.873 - 30504.029: 99.3789% ( 1) 00:10:38.910 36938.473 - 37176.785: 99.4080% ( 3) 00:10:38.910 37176.785 - 37415.098: 99.4759% ( 7) 00:10:38.910 37415.098 - 37653.411: 99.5536% ( 8) 00:10:38.910 37653.411 - 37891.724: 99.6312% ( 8) 00:10:38.910 37891.724 - 38130.036: 99.6991% ( 7) 00:10:38.910 38130.036 - 38368.349: 99.7671% ( 7) 00:10:38.910 38368.349 - 38606.662: 99.8447% ( 8) 00:10:38.910 38606.662 - 38844.975: 99.9127% ( 7) 00:10:38.910 38844.975 - 39083.287: 99.9806% ( 7) 00:10:38.910 39083.287 - 39321.600: 100.0000% ( 2) 00:10:38.910 00:10:38.910 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:38.910 ============================================================================== 00:10:38.910 Range in us Cumulative IO count 00:10:38.910 9889.978 - 9949.556: 0.0097% ( 1) 00:10:38.910 9949.556 - 10009.135: 0.0776% ( 7) 00:10:38.910 10009.135 - 10068.713: 0.1359% ( 6) 00:10:38.910 10068.713 - 10128.291: 0.2426% ( 11) 00:10:38.910 10128.291 - 10187.869: 0.4755% ( 24) 00:10:38.910 10187.869 - 10247.447: 0.8734% ( 41) 00:10:38.910 10247.447 - 10307.025: 1.3393% ( 48) 00:10:38.910 10307.025 - 10366.604: 2.1739% ( 86) 00:10:38.910 10366.604 - 10426.182: 3.2512% ( 111) 00:10:38.910 10426.182 - 10485.760: 4.3964% ( 118) 00:10:38.910 10485.760 - 10545.338: 5.7065% ( 135) 00:10:38.910 10545.338 - 10604.916: 7.3855% ( 173) 00:10:38.910 10604.916 - 10664.495: 9.3944% ( 207) 00:10:38.910 10664.495 - 10724.073: 11.5780% ( 225) 00:10:38.910 10724.073 - 10783.651: 13.7908% ( 228) 00:10:38.910 10783.651 - 10843.229: 16.4305% ( 272) 00:10:38.910 10843.229 - 10902.807: 19.1964% ( 285) 00:10:38.910 10902.807 - 10962.385: 22.4864% ( 339) 00:10:38.910 10962.385 - 11021.964: 26.2811% ( 391) 00:10:38.910 11021.964 - 11081.542: 29.8428% ( 367) 00:10:38.910 11081.542 - 11141.120: 33.5307% ( 380) 00:10:38.910 11141.120 - 11200.698: 37.6747% ( 427) 00:10:38.910 11200.698 - 11260.276: 41.6925% ( 414) 00:10:38.910 11260.276 - 11319.855: 46.2539% ( 470) 00:10:38.910 11319.855 - 11379.433: 
49.7962% ( 365) 00:10:38.910 11379.433 - 11439.011: 53.3773% ( 369) 00:10:38.910 11439.011 - 11498.589: 56.1044% ( 281) 00:10:38.910 11498.589 - 11558.167: 58.5986% ( 257) 00:10:38.910 11558.167 - 11617.745: 60.8793% ( 235) 00:10:38.910 11617.745 - 11677.324: 62.8979% ( 208) 00:10:38.910 11677.324 - 11736.902: 64.5575% ( 171) 00:10:38.910 11736.902 - 11796.480: 65.8773% ( 136) 00:10:38.910 11796.480 - 11856.058: 67.1778% ( 134) 00:10:38.910 11856.058 - 11915.636: 68.5947% ( 146) 00:10:38.910 11915.636 - 11975.215: 69.9534% ( 140) 00:10:38.910 11975.215 - 12034.793: 71.0598% ( 114) 00:10:38.910 12034.793 - 12094.371: 72.1079% ( 108) 00:10:38.910 12094.371 - 12153.949: 73.0590% ( 98) 00:10:38.910 12153.949 - 12213.527: 74.2624% ( 124) 00:10:38.910 12213.527 - 12273.105: 75.2135% ( 98) 00:10:38.910 12273.105 - 12332.684: 75.8055% ( 61) 00:10:38.910 12332.684 - 12392.262: 76.2616% ( 47) 00:10:38.910 12392.262 - 12451.840: 76.7469% ( 50) 00:10:38.910 12451.840 - 12511.418: 77.1545% ( 42) 00:10:38.910 12511.418 - 12570.996: 77.5524% ( 41) 00:10:38.910 12570.996 - 12630.575: 77.8436% ( 30) 00:10:38.910 12630.575 - 12690.153: 78.2318% ( 40) 00:10:38.910 12690.153 - 12749.731: 78.4744% ( 25) 00:10:38.910 12749.731 - 12809.309: 78.6879% ( 22) 00:10:38.910 12809.309 - 12868.887: 78.8820% ( 20) 00:10:38.910 12868.887 - 12928.465: 79.1052% ( 23) 00:10:38.911 12928.465 - 12988.044: 79.3575% ( 26) 00:10:38.911 12988.044 - 13047.622: 79.6778% ( 33) 00:10:38.911 13047.622 - 13107.200: 80.0563% ( 39) 00:10:38.911 13107.200 - 13166.778: 80.2698% ( 22) 00:10:38.911 13166.778 - 13226.356: 80.4833% ( 22) 00:10:38.911 13226.356 - 13285.935: 80.7453% ( 27) 00:10:38.911 13285.935 - 13345.513: 80.9200% ( 18) 00:10:38.911 13345.513 - 13405.091: 81.1141% ( 20) 00:10:38.911 13405.091 - 13464.669: 81.3276% ( 22) 00:10:38.911 13464.669 - 13524.247: 81.5897% ( 27) 00:10:38.911 13524.247 - 13583.825: 81.7741% ( 19) 00:10:38.911 13583.825 - 13643.404: 82.0264% ( 26) 00:10:38.911 13643.404 - 13702.982: 82.3855% ( 37) 00:10:38.911 13702.982 - 13762.560: 82.7543% ( 38) 00:10:38.911 13762.560 - 13822.138: 83.1813% ( 44) 00:10:38.911 13822.138 - 13881.716: 83.5113% ( 34) 00:10:38.911 13881.716 - 13941.295: 84.1712% ( 68) 00:10:38.911 13941.295 - 14000.873: 84.5594% ( 40) 00:10:38.911 14000.873 - 14060.451: 84.9670% ( 42) 00:10:38.911 14060.451 - 14120.029: 85.5590% ( 61) 00:10:38.911 14120.029 - 14179.607: 86.0637% ( 52) 00:10:38.911 14179.607 - 14239.185: 86.7333% ( 69) 00:10:38.911 14239.185 - 14298.764: 87.3059% ( 59) 00:10:38.911 14298.764 - 14358.342: 87.6941% ( 40) 00:10:38.911 14358.342 - 14417.920: 88.0726% ( 39) 00:10:38.911 14417.920 - 14477.498: 88.5093% ( 45) 00:10:38.911 14477.498 - 14537.076: 88.8587% ( 36) 00:10:38.911 14537.076 - 14596.655: 89.2372% ( 39) 00:10:38.911 14596.655 - 14656.233: 89.5186% ( 29) 00:10:38.911 14656.233 - 14715.811: 89.7516% ( 24) 00:10:38.911 14715.811 - 14775.389: 89.9651% ( 22) 00:10:38.911 14775.389 - 14834.967: 90.2077% ( 25) 00:10:38.911 14834.967 - 14894.545: 90.5474% ( 35) 00:10:38.911 14894.545 - 14954.124: 90.7900% ( 25) 00:10:38.911 14954.124 - 15013.702: 91.0520% ( 27) 00:10:38.911 15013.702 - 15073.280: 91.2946% ( 25) 00:10:38.911 15073.280 - 15132.858: 91.5179% ( 23) 00:10:38.911 15132.858 - 15192.436: 91.7411% ( 23) 00:10:38.911 15192.436 - 15252.015: 91.9061% ( 17) 00:10:38.911 15252.015 - 15371.171: 92.1681% ( 27) 00:10:38.911 15371.171 - 15490.327: 92.4107% ( 25) 00:10:38.911 15490.327 - 15609.484: 92.6630% ( 26) 00:10:38.911 15609.484 - 15728.640: 92.9348% ( 28) 
00:10:38.911 15728.640 - 15847.796: 93.1774% ( 25) 00:10:38.911 15847.796 - 15966.953: 93.3133% ( 14) 00:10:38.911 15966.953 - 16086.109: 93.4297% ( 12) 00:10:38.911 16086.109 - 16205.265: 93.5268% ( 10) 00:10:38.911 16205.265 - 16324.422: 93.7694% ( 25) 00:10:38.911 16324.422 - 16443.578: 93.9635% ( 20) 00:10:38.911 16443.578 - 16562.735: 94.1964% ( 24) 00:10:38.911 16562.735 - 16681.891: 94.4099% ( 22) 00:10:38.911 16681.891 - 16801.047: 94.5555% ( 15) 00:10:38.911 16801.047 - 16920.204: 94.8273% ( 28) 00:10:38.911 16920.204 - 17039.360: 95.0311% ( 21) 00:10:38.911 17039.360 - 17158.516: 95.2349% ( 21) 00:10:38.911 17158.516 - 17277.673: 95.4193% ( 19) 00:10:38.911 17277.673 - 17396.829: 95.5551% ( 14) 00:10:38.911 17396.829 - 17515.985: 95.6910% ( 14) 00:10:38.911 17515.985 - 17635.142: 95.7880% ( 10) 00:10:38.911 17635.142 - 17754.298: 95.9045% ( 12) 00:10:38.911 17754.298 - 17873.455: 96.0307% ( 13) 00:10:38.911 17873.455 - 17992.611: 96.1374% ( 11) 00:10:38.911 17992.611 - 18111.767: 96.1762% ( 4) 00:10:38.911 18111.767 - 18230.924: 96.2151% ( 4) 00:10:38.911 18230.924 - 18350.080: 96.2539% ( 4) 00:10:38.911 18350.080 - 18469.236: 96.2733% ( 2) 00:10:38.911 21805.615 - 21924.771: 96.2830% ( 1) 00:10:38.911 21924.771 - 22043.927: 96.3218% ( 4) 00:10:38.911 22043.927 - 22163.084: 96.3995% ( 8) 00:10:38.911 22163.084 - 22282.240: 96.5159% ( 12) 00:10:38.911 22282.240 - 22401.396: 96.6906% ( 18) 00:10:38.911 22401.396 - 22520.553: 96.9623% ( 28) 00:10:38.911 22520.553 - 22639.709: 97.1564% ( 20) 00:10:38.911 22639.709 - 22758.865: 97.3408% ( 19) 00:10:38.911 22758.865 - 22878.022: 97.4573% ( 12) 00:10:38.911 22878.022 - 22997.178: 97.5932% ( 14) 00:10:38.911 22997.178 - 23116.335: 97.6708% ( 8) 00:10:38.911 23116.335 - 23235.491: 97.8067% ( 14) 00:10:38.911 23235.491 - 23354.647: 97.9717% ( 17) 00:10:38.911 23354.647 - 23473.804: 98.2531% ( 29) 00:10:38.911 23473.804 - 23592.960: 98.3793% ( 13) 00:10:38.911 23592.960 - 23712.116: 98.5443% ( 17) 00:10:38.911 23712.116 - 23831.273: 98.6316% ( 9) 00:10:38.911 23831.273 - 23950.429: 98.6898% ( 6) 00:10:38.911 23950.429 - 24069.585: 98.7286% ( 4) 00:10:38.911 24069.585 - 24188.742: 98.7578% ( 3) 00:10:38.911 26214.400 - 26333.556: 98.7966% ( 4) 00:10:38.911 26333.556 - 26452.713: 98.8257% ( 3) 00:10:38.911 26452.713 - 26571.869: 98.8645% ( 4) 00:10:38.911 26571.869 - 26691.025: 98.9033% ( 4) 00:10:38.911 26691.025 - 26810.182: 98.9325% ( 3) 00:10:38.911 26810.182 - 26929.338: 98.9713% ( 4) 00:10:38.911 26929.338 - 27048.495: 99.0004% ( 3) 00:10:38.911 27048.495 - 27167.651: 99.0392% ( 4) 00:10:38.911 27167.651 - 27286.807: 99.0683% ( 3) 00:10:38.911 27286.807 - 27405.964: 99.1071% ( 4) 00:10:38.911 27405.964 - 27525.120: 99.1460% ( 4) 00:10:38.911 27525.120 - 27644.276: 99.1848% ( 4) 00:10:38.911 27644.276 - 27763.433: 99.2139% ( 3) 00:10:38.911 27763.433 - 27882.589: 99.2527% ( 4) 00:10:38.911 27882.589 - 28001.745: 99.2915% ( 4) 00:10:38.911 28001.745 - 28120.902: 99.3207% ( 3) 00:10:38.911 28120.902 - 28240.058: 99.3595% ( 4) 00:10:38.911 28240.058 - 28359.215: 99.3789% ( 2) 00:10:38.911 35031.971 - 35270.284: 99.4371% ( 6) 00:10:38.911 35270.284 - 35508.596: 99.5148% ( 8) 00:10:38.911 35508.596 - 35746.909: 99.5827% ( 7) 00:10:38.911 35746.909 - 35985.222: 99.6506% ( 7) 00:10:38.911 35985.222 - 36223.535: 99.7283% ( 8) 00:10:38.911 36223.535 - 36461.847: 99.7865% ( 6) 00:10:38.911 36461.847 - 36700.160: 99.8641% ( 8) 00:10:38.911 36700.160 - 36938.473: 99.9418% ( 8) 00:10:38.911 36938.473 - 37176.785: 100.0000% ( 6) 00:10:38.911 
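The histogram blocks in this output print one bucket per line as "<low_us> - <high_us>: <cumulative %> ( count )": the bucket bounds in microseconds, the cumulative share of I/Os completed at or below the upper bound, and what appears to be the per-bucket I/O count. The summary blocks above already list exact percentiles, and the throughput table cross-checks as well: 10262.59 IOPS at the 12288-byte I/O size requested via -o works out to 10262.59 * 12288 / 2^20 ~ 120.26 MiB/s, matching the MiB/s column. A percentile can also be recovered straight from a histogram by taking the first bucket whose cumulative share reaches the target. A minimal awk sketch of that, assuming a capture (nvme_perf.log is a hypothetical filename) in which each bucket sits on its own timestamped line as in the original console stream:

awk '$3 == "-" && $5 ~ /%$/ {        # bucket lines look like: <ts> <low> - <high>: <pct>% ( n)
       cum = $5; sub(/%$/, "", cum)  # strip the trailing percent sign
       if (cum + 0 >= 99.0) {        # first bucket whose cumulative share crosses 99%
         sub(/:$/, "", $4)           # the upper bound arrives with a colon, e.g. "27048.495:"
         print "approx p99 <= " $4 " us"
         exit                        # only the first histogram in the capture is inspected
       }
     }' nvme_perf.log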
00:10:38.911 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:10:38.911 ==============================================================================
00:10:38.911        Range in us     Cumulative    IO count
00:10:38.911 [per-bucket rows condensed: cumulative I/O count rises from 0.0096% ( 1) in the 9830.400 - 9889.978 us bucket to 100.0000% ( 3) at 27525.120 - 27644.276 us; roughly nine in ten I/Os complete between about 10.2 ms and 15.2 ms, with a thin tail out to 27.6 ms]
00:10:38.912
00:10:38.912 18:12:13 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:10:38.912
00:10:38.912 real 0m2.793s
00:10:38.912 user 0m2.332s
00:10:38.912 sys 0m0.346s
00:10:38.912 18:12:13 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:38.912 18:12:13 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:10:38.912 ************************************
00:10:38.912 END TEST nvme_perf
00:10:38.912 ************************************
00:10:38.912 18:12:13 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:10:38.912 18:12:13 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:10:38.912 18:12:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:38.912 18:12:13 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:38.912 ************************************
00:10:38.912 START TEST nvme_hello_world
00:10:38.912 ************************************
00:10:38.912 18:12:13 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:10:39.170 Initializing NVMe Controllers
00:10:39.170 Attached to 0000:00:10.0
00:10:39.170 Namespace ID: 1 size: 6GB
00:10:39.170 Attached to 0000:00:11.0
00:10:39.170 Namespace ID: 1 size: 5GB
00:10:39.170 Attached to 0000:00:13.0
00:10:39.170 Namespace ID: 1 size: 1GB
00:10:39.170 Attached to 0000:00:12.0
00:10:39.170 Namespace ID: 1 size: 4GB
00:10:39.170 Namespace ID: 2 size: 4GB
00:10:39.170 Namespace ID: 3 size: 4GB
00:10:39.170 Initialization complete.
00:10:39.170 INFO: using host memory buffer for IO
00:10:39.170 Hello world!
00:10:39.170 INFO: using host memory buffer for IO
00:10:39.170 Hello world!
00:10:39.170 INFO: using host memory buffer for IO
00:10:39.170 Hello world!
00:10:39.170 INFO: using host memory buffer for IO
00:10:39.170 Hello world!
00:10:39.170 INFO: using host memory buffer for IO
00:10:39.170 Hello world!
00:10:39.170 INFO: using host memory buffer for IO
00:10:39.170 Hello world!
00:10:39.429
00:10:39.429 real 0m0.375s
00:10:39.429 user 0m0.172s
00:10:39.429 sys 0m0.153s
00:10:39.429 ************************************
00:10:39.429 END TEST nvme_hello_world
00:10:39.429 ************************************
00:10:39.429 18:12:13 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:39.429 18:12:13 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:10:39.429 18:12:13 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:10:39.429 18:12:13 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:39.429 18:12:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:39.429 18:12:13 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:39.429 ************************************
00:10:39.429 START TEST nvme_sgl
00:10:39.429 ************************************
00:10:39.429 18:12:13 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:10:39.687 [expected-failure rows condensed: build_io_request_0, _1, _3, _8, _9 and _11 each report "Invalid IO length parameter" on 0000:00:10.0 and 0000:00:11.0, and build_io_request_0 through _11 each report "Invalid IO length parameter" on 0000:00:13.0 and 0000:00:12.0]
00:10:39.687 NVMe Readv/Writev Request test
00:10:39.687 Attached to 0000:00:10.0
00:10:39.687 Attached to 0000:00:11.0
00:10:39.687 Attached to 0000:00:13.0
00:10:39.687 Attached to 0000:00:12.0
00:10:39.687 0000:00:10.0: build_io_request_2 test passed
00:10:39.687 0000:00:10.0: build_io_request_4 test passed
00:10:39.687 0000:00:10.0: build_io_request_5 test passed
00:10:39.687 0000:00:10.0: build_io_request_6 test passed
00:10:39.687 0000:00:10.0: build_io_request_7 test passed
00:10:39.687 0000:00:10.0: build_io_request_10 test passed
00:10:39.687 0000:00:11.0: build_io_request_2 test passed
00:10:39.687 0000:00:11.0: build_io_request_4 test passed
00:10:39.687 0000:00:11.0: build_io_request_5 test passed
00:10:39.687 0000:00:11.0: build_io_request_6 test passed
00:10:39.687 0000:00:11.0: build_io_request_7 test passed
00:10:39.687 0000:00:11.0: build_io_request_10 test passed
00:10:39.687 Cleaning up...
00:10:39.687
00:10:39.687 real 0m0.451s
00:10:39.687 user 0m0.214s
00:10:39.687 sys 0m0.187s
00:10:39.687 18:12:14 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:39.687 18:12:14 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:10:39.687 ************************************
00:10:39.687 END TEST nvme_sgl
00:10:39.687 ************************************
00:10:39.946 18:12:14 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:10:39.946 18:12:14 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:39.946 18:12:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:39.946 18:12:14 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:39.946 ************************************
00:10:39.946 START TEST nvme_e2edp
00:10:39.946 ************************************
00:10:39.946 18:12:14 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:10:40.203 NVMe Write/Read with End-to-End data protection test
00:10:40.203 Attached to 0000:00:10.0
00:10:40.203 Attached to 0000:00:11.0
00:10:40.203 Attached to 0000:00:13.0
00:10:40.203 Attached to 0000:00:12.0
00:10:40.203 Cleaning up...
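Note that the end-to-end data protection pass above attaches to all four controllers and goes straight to cleanup, which suggests none of the emulated namespaces exposes protection information for the tool to exercise. A minimal sketch for re-running this binary by hand, assuming the repo layout this job uses and that the devices are already bound for SPDK (e.g. via scripts/setup.sh); the SPDK_REPO variable is illustrative, not part of the harness:

#!/usr/bin/env bash
# Hedged standalone re-run of the nvme_dp binary exercised above.
# Assumption: devices are already bound to a userspace-capable driver.
set -euo pipefail
SPDK_REPO=${SPDK_REPO:-/home/vagrant/spdk_repo/spdk}
"$SPDK_REPO/test/nvme/e2edp/nvme_dp"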
00:10:40.203 ************************************
00:10:40.203 END TEST nvme_e2edp
00:10:40.203 ************************************
00:10:40.203
00:10:40.203 real 0m0.376s
00:10:40.203 user 0m0.136s
00:10:40.203 sys 0m0.190s
00:10:40.203 18:12:14 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:40.203 18:12:14 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:10:40.203 18:12:14 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:10:40.203 18:12:14 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:40.203 18:12:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:40.203 18:12:14 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:40.203 ************************************
00:10:40.203 START TEST nvme_reserve
00:10:40.203 ************************************
00:10:40.203 18:12:14 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:10:40.770 =====================================================
00:10:40.770 NVMe Controller at PCI bus 0, device 16, function 0
00:10:40.770 =====================================================
00:10:40.770 Reservations:                Not Supported
00:10:40.770 =====================================================
00:10:40.770 NVMe Controller at PCI bus 0, device 17, function 0
00:10:40.770 =====================================================
00:10:40.770 Reservations:                Not Supported
00:10:40.770 =====================================================
00:10:40.770 NVMe Controller at PCI bus 0, device 19, function 0
00:10:40.770 =====================================================
00:10:40.770 Reservations:                Not Supported
00:10:40.770 =====================================================
00:10:40.770 NVMe Controller at PCI bus 0, device 18, function 0
00:10:40.770 =====================================================
00:10:40.770 Reservations:                Not Supported
00:10:40.770 Reservation test passed
00:10:40.770
00:10:40.770 real 0m0.345s
00:10:40.770 user 0m0.131s
00:10:40.770 sys 0m0.169s
00:10:40.770 ************************************
00:10:40.770 END TEST nvme_reserve
00:10:40.770 ************************************
00:10:40.770 18:12:14 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:40.770 18:12:14 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:10:40.770 18:12:14 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:10:40.770 18:12:14 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:40.770 18:12:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:40.770 18:12:14 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:40.770 ************************************
00:10:40.770 START TEST nvme_err_injection
00:10:40.770 ************************************
00:10:40.770 18:12:14 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:10:41.030 NVMe Error Injection test
00:10:41.030 Attached to 0000:00:10.0
00:10:41.030 Attached to 0000:00:11.0
00:10:41.030 Attached to 0000:00:13.0
00:10:41.030 Attached to 0000:00:12.0
00:10:41.030 0000:00:11.0: get features failed as expected
00:10:41.030 0000:00:13.0: get features failed as expected
00:10:41.030 0000:00:12.0: get features failed as expected
00:10:41.030 0000:00:10.0: get features failed as expected
00:10:41.030 0000:00:10.0: get features successfully as expected
00:10:41.030 0000:00:11.0: get features successfully as expected
00:10:41.030 0000:00:13.0: get features successfully as expected
00:10:41.030 0000:00:12.0: get features successfully as expected
00:10:41.030 0000:00:11.0: read failed as expected
00:10:41.030 0000:00:10.0: read failed as expected
00:10:41.030 0000:00:13.0: read failed as expected
00:10:41.030 0000:00:12.0: read failed as expected
00:10:41.030 0000:00:10.0: read successfully as expected
00:10:41.030 0000:00:11.0: read successfully as expected
00:10:41.030 0000:00:13.0: read successfully as expected
00:10:41.030 0000:00:12.0: read successfully as expected
00:10:41.030 Cleaning up...
00:10:41.030
00:10:41.030 real 0m0.396s
00:10:41.030 user 0m0.158s
00:10:41.030 sys 0m0.190s
00:10:41.030 ************************************
00:10:41.030 END TEST nvme_err_injection
00:10:41.030 ************************************
00:10:41.030 18:12:15 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:41.030 18:12:15 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:10:41.030 18:12:15 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:10:41.030 18:12:15 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:10:41.030 18:12:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:41.030 18:12:15 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:41.030 ************************************
00:10:41.030 START TEST nvme_overhead
00:10:41.030 ************************************
00:10:41.030 18:12:15 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:10:42.405 Initializing NVMe Controllers
00:10:42.405 Attached to 0000:00:10.0
00:10:42.405 Attached to 0000:00:11.0
00:10:42.405 Attached to 0000:00:13.0
00:10:42.405 Attached to 0000:00:12.0
00:10:42.405 Initialization complete. Launching workers.
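Before the latency numbers below: the overhead tool was invoked with -o 4096 (I/O size in bytes), -t 1 (apparently the run time in seconds, matching the ~1.4 s wall time reported at the end of this test) and -H, which from the output that follows evidently enables the submit/complete histograms; -i 0 is the shared-memory instance id used throughout this suite. A hedged re-run under the same flags (flag meanings are inferred from this log, not restated from documentation):

#!/usr/bin/env bash
# Repeat the nvme_overhead measurement with the exact flags logged above.
set -euo pipefail
SPDK_REPO=${SPDK_REPO:-/home/vagrant/spdk_repo/spdk}   # illustrative variable
"$SPDK_REPO/test/nvme/overhead/overhead" -o 4096 -t 1 -H -i 0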
00:10:42.405     submit (in ns)   avg, min, max =  15140.4,  12628.6, 561657.3
00:10:42.405   complete (in ns)   avg, min, max =  10941.5,   9208.6,  74414.5
00:10:42.405
00:10:42.405 Submit histogram
00:10:42.405 ================
00:10:42.405        Range in us     Cumulative     Count
00:10:42.405 [per-bucket rows condensed: cumulative count rises from 0.0107% ( 1) in the 12.625 - 12.684 us bucket to 100.0000% ( 1) at 558.545 - 562.269 us; nearly all submissions land between roughly 13 us and 18 us, with a sparse tail of single completions out past 560 us]
00:10:42.406
00:10:42.406 Complete histogram
00:10:42.406 ==================
00:10:42.406        Range in us     Cumulative     Count
00:10:42.406 [per-bucket rows condensed: cumulative count rises from 0.0107% ( 1) in the 9.193 - 9.251 us bucket to 100.0000% ( 1) at 74.007 - 74.473 us; most completions fall between roughly 9.5 us and 13 us]
00:10:42.407
00:10:42.407 ************************************
00:10:42.407 END TEST nvme_overhead
00:10:42.407 ************************************
00:10:42.407
00:10:42.407 real 0m1.381s
00:10:42.407 user 0m1.133s
00:10:42.407 sys 0m0.192s
00:10:42.407 18:12:16 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:42.407 18:12:16 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:10:42.407 18:12:16 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:10:42.407 18:12:16 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:10:42.407 18:12:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:42.407 18:12:16 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:42.407 ************************************
00:10:42.407 START TEST nvme_arbitration
00:10:42.407 ************************************
00:10:42.407 18:12:16 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:10:46.608 Initializing NVMe Controllers
00:10:46.608 Attached to 0000:00:10.0
00:10:46.608 Attached to 0000:00:11.0
00:10:46.608 Attached to 0000:00:13.0
00:10:46.608 Attached to 0000:00:12.0
00:10:46.608 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:10:46.608 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:10:46.608 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:10:46.608 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:10:46.608 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:10:46.608 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:10:46.608 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:10:46.608 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:10:46.608 Initialization complete. Launching workers.
00:10:46.608 Starting thread on core 1 with urgent priority queue
00:10:46.608 Starting thread on core 2 with urgent priority queue
00:10:46.608 Starting thread on core 3 with urgent priority queue
00:10:46.608 Starting thread on core 0 with urgent priority queue
00:10:46.608 QEMU NVMe Ctrl (12340 ) core 0: 533.33 IO/s 187.50 secs/100000 ios
00:10:46.608 QEMU NVMe Ctrl (12342 ) core 0: 533.33 IO/s 187.50 secs/100000 ios
00:10:46.608 QEMU NVMe Ctrl (12341 ) core 1: 576.00 IO/s 173.61 secs/100000 ios
00:10:46.608 QEMU NVMe Ctrl (12342 ) core 1: 576.00 IO/s 173.61 secs/100000 ios
00:10:46.608 QEMU NVMe Ctrl (12343 ) core 2: 682.67 IO/s 146.48 secs/100000 ios
00:10:46.608 QEMU NVMe Ctrl (12342 ) core 3: 682.67 IO/s 146.48 secs/100000 ios
00:10:46.608 ========================================================
00:10:46.608
00:10:46.608
00:10:46.608 real 0m3.435s
00:10:46.608 user 0m9.373s
00:10:46.608 sys 0m0.172s
00:10:46.608 ************************************
00:10:46.608 END TEST nvme_arbitration
00:10:46.608 ************************************
00:10:46.608 18:12:20 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:46.608 18:12:20 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:10:46.608 18:12:20 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:10:46.608 18:12:20 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:10:46.608 18:12:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:46.608 18:12:20 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:46.608 ************************************
00:10:46.608 START TEST nvme_single_aen
00:10:46.608 ************************************
00:10:46.608 18:12:20 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:10:46.608 Asynchronous Event Request test
00:10:46.608 Attached to 0000:00:10.0
00:10:46.608 Attached to 0000:00:11.0
00:10:46.608 Attached to 0000:00:13.0
00:10:46.608 Attached to 0000:00:12.0
00:10:46.608 Reset controller to setup AER completions for this process
00:10:46.608 Registering asynchronous event callbacks...
00:10:46.608 Getting orig temperature thresholds of all controllers
00:10:46.608 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:10:46.608 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:10:46.608 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:10:46.608 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:10:46.608 Setting all controllers temperature threshold low to trigger AER
00:10:46.608 Waiting for all controllers temperature threshold to be set lower
00:10:46.608 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:10:46.608 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:10:46.608 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:10:46.608 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:10:46.608 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:10:46.608 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:10:46.608 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:10:46.608 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:10:46.608 Waiting for all controllers to trigger AER and reset threshold
00:10:46.608 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:10:46.608 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:10:46.608 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:10:46.608 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:10:46.608 Cleaning up...
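The sequence above is the AER contract in miniature: the test lowers each controller's temperature threshold below the current reading, the controller fires an asynchronous event (log page 2, event type 0x01), and the aer_cb handler restores the threshold. A hypothetical wrapper that repeats the run with a hard cap, so a controller that never fires the event fails fast instead of hanging; the -T and -i 0 flags are taken verbatim from this log, while the 60-second cap is an added assumption:

#!/usr/bin/env bash
# Hypothetical wrapper around the aer test binary invoked above.
set -euo pipefail
AER=/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer
rc=0
timeout --preserve-status 60 "$AER" -T -i 0 || rc=$?
if [ "$rc" -eq 0 ]; then
    echo "nvme_single_aen: PASS"
else
    echo "nvme_single_aen: FAIL (exit code $rc)" >&2
fi
exit "$rc"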
00:10:46.608
00:10:46.608 real 0m0.316s
00:10:46.608 user 0m0.116s
00:10:46.608 sys 0m0.153s
00:10:46.608 18:12:20 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:46.608 ************************************
00:10:46.608 END TEST nvme_single_aen
00:10:46.608 ************************************
00:10:46.608 18:12:20 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:10:46.608 18:12:20 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:10:46.608 18:12:20 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:46.608 18:12:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:46.608 18:12:20 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:46.608 ************************************
00:10:46.608 START TEST nvme_doorbell_aers
00:10:46.608 ************************************
00:10:46.608 18:12:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:10:46.608 18:12:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:10:46.608 18:12:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:10:46.608 18:12:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:10:46.608 18:12:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:10:46.608 18:12:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:10:46.608 18:12:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:10:46.608 18:12:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:10:46.608 18:12:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:10:46.608 18:12:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:10:46.608 18:12:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:10:46.608 18:12:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:10:46.608 18:12:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:10:46.608 18:12:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:10:46.865 [2024-11-26 18:12:21.092813] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64708) is not found. Dropping the request.
00:10:56.831 Executing: test_write_invalid_db
00:10:56.831 Waiting for AER completion...
00:10:56.831 Failure: test_write_invalid_db
00:10:56.831
00:10:56.831 Executing: test_invalid_db_write_overflow_sq
00:10:56.831 Waiting for AER completion...
00:10:56.831 Failure: test_invalid_db_write_overflow_sq
00:10:56.831
00:10:56.831 Executing: test_invalid_db_write_overflow_cq
00:10:56.831 Waiting for AER completion...
00:10:56.831 Failure: test_invalid_db_write_overflow_cq
00:10:56.831
00:10:56.831 18:12:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:10:56.831 18:12:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0'
00:10:56.831 [2024-11-26 18:12:31.229013] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64708) is not found. Dropping the request.
00:11:06.794 Executing: test_write_invalid_db
00:11:06.794 Waiting for AER completion...
00:11:06.794 Failure: test_write_invalid_db
00:11:06.794
00:11:06.794 Executing: test_invalid_db_write_overflow_sq
00:11:06.794 Waiting for AER completion...
00:11:06.794 Failure: test_invalid_db_write_overflow_sq
00:11:06.794
00:11:06.794 Executing: test_invalid_db_write_overflow_cq
00:11:06.794 Waiting for AER completion...
00:11:06.794 Failure: test_invalid_db_write_overflow_cq
00:11:06.795
00:11:06.795 18:12:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:11:06.795 18:12:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0'
00:11:06.795 [2024-11-26 18:12:41.252035] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64708) is not found. Dropping the request.
00:11:16.762 Executing: test_write_invalid_db
00:11:16.762 Waiting for AER completion...
00:11:16.762 Failure: test_write_invalid_db
00:11:16.762
00:11:16.762 Executing: test_invalid_db_write_overflow_sq
00:11:16.762 Waiting for AER completion...
00:11:16.762 Failure: test_invalid_db_write_overflow_sq
00:11:16.762
00:11:16.762 Executing: test_invalid_db_write_overflow_cq
00:11:16.762 Waiting for AER completion...
00:11:16.762 Failure: test_invalid_db_write_overflow_cq
00:11:16.762
00:11:16.762 18:12:50 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:11:16.762 18:12:50 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0'
00:11:17.017 [2024-11-26 18:12:51.280468] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64708) is not found. Dropping the request.
00:11:26.975 Executing: test_write_invalid_db
00:11:26.975 Waiting for AER completion...
00:11:26.975 Failure: test_write_invalid_db
00:11:26.975
00:11:26.975 Executing: test_invalid_db_write_overflow_sq
00:11:26.975 Waiting for AER completion...
00:11:26.975 Failure: test_invalid_db_write_overflow_sq
00:11:26.975
00:11:26.975 Executing: test_invalid_db_write_overflow_cq
00:11:26.975 Waiting for AER completion...
00:11:26.975 Failure: test_invalid_db_write_overflow_cq
00:11:26.975
00:11:26.975 ************************************
00:11:26.975 END TEST nvme_doorbell_aers
00:11:26.975 ************************************
00:11:26.975
00:11:26.975 real 0m40.272s
00:11:26.975 user 0m34.172s
00:11:26.975 sys 0m5.656s
00:11:26.975 18:13:00 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:26.975 18:13:00 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
00:11:26.975 18:13:01 nvme -- nvme/nvme.sh@97 -- # uname
00:11:26.975 18:13:01 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:11:26.975 18:13:01 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:11:26.975 18:13:01 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:11:26.975 18:13:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:26.975 18:13:01 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:26.975 ************************************
00:11:26.975 START TEST nvme_multi_aen
00:11:26.975 ************************************
00:11:26.975 18:13:01 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:11:26.975 [2024-11-26 18:13:01.339988] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64708) is not found. Dropping the request.
00:11:26.975 [2024-11-26 18:13:01.340121] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64708) is not found. Dropping the request.
00:11:26.975 [2024-11-26 18:13:01.340145] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64708) is not found. Dropping the request.
00:11:26.975 [2024-11-26 18:13:01.342051] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64708) is not found. Dropping the request.
00:11:26.975 [2024-11-26 18:13:01.342261] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64708) is not found. Dropping the request.
00:11:26.975 [2024-11-26 18:13:01.342288] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64708) is not found. Dropping the request.
00:11:26.975 [2024-11-26 18:13:01.343673] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64708) is not found. Dropping the request.
00:11:26.975 [2024-11-26 18:13:01.343715] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64708) is not found. Dropping the request.
00:11:26.975 [2024-11-26 18:13:01.343736] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64708) is not found. Dropping the request.
00:11:26.975 [2024-11-26 18:13:01.345224] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64708) is not found. Dropping the request.
00:11:26.975 [2024-11-26 18:13:01.345410] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64708) is not found. Dropping the request.
00:11:26.975 [2024-11-26 18:13:01.345435] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64708) is not found. Dropping the request.
00:11:26.975 Child process pid: 65227
00:11:27.233 [Child] Asynchronous Event Request test
00:11:27.233 [Child] Attached to 0000:00:10.0
00:11:27.233 [Child] Attached to 0000:00:11.0
00:11:27.233 [Child] Attached to 0000:00:13.0
00:11:27.233 [Child] Attached to 0000:00:12.0
00:11:27.233 [Child] Registering asynchronous event callbacks...
00:11:27.233 [Child] Getting orig temperature thresholds of all controllers
00:11:27.233 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:27.233 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:27.233 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:27.233 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:27.233 [Child] Waiting for all controllers to trigger AER and reset threshold
00:11:27.233 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:27.233 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:27.233 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:27.233 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:27.233 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:27.233 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:27.233 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:27.233 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:27.233 [Child] Cleaning up...
00:11:27.491 Asynchronous Event Request test
00:11:27.491 Attached to 0000:00:10.0
00:11:27.491 Attached to 0000:00:11.0
00:11:27.491 Attached to 0000:00:13.0
00:11:27.491 Attached to 0000:00:12.0
00:11:27.491 Reset controller to setup AER completions for this process
00:11:27.491 Registering asynchronous event callbacks...
00:11:27.491 Getting orig temperature thresholds of all controllers
00:11:27.491 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:27.491 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:27.491 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:27.491 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:27.491 Setting all controllers temperature threshold low to trigger AER
00:11:27.491 Waiting for all controllers temperature threshold to be set lower
00:11:27.491 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:27.491 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:11:27.491 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:27.491 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:11:27.491 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:27.491 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:11:27.491 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:27.492 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:11:27.492 Waiting for all controllers to trigger AER and reset threshold
00:11:27.492 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:27.492 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:27.492 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:27.492 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:27.492 Cleaning up...
00:11:27.492
00:11:27.492 real 0m0.724s
00:11:27.492 user 0m0.254s
00:11:27.492 sys 0m0.354s
00:11:27.492 18:13:01 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:27.492 ************************************
00:11:27.492 END TEST nvme_multi_aen
00:11:27.492 ************************************
00:11:27.492 18:13:01 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x
00:11:27.492 18:13:01 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:11:27.492 18:13:01 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:27.492 18:13:01 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:27.492 18:13:01 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:27.492 ************************************
00:11:27.492 START TEST nvme_startup
00:11:27.492 ************************************
00:11:27.492 18:13:01 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:11:27.749 Initializing NVMe Controllers
00:11:27.749 Attached to 0000:00:10.0
00:11:27.749 Attached to 0000:00:11.0
00:11:27.749 Attached to 0000:00:13.0
00:11:27.749 Attached to 0000:00:12.0
00:11:27.749 Initialization complete.
00:11:27.749 Time used:266717.406 (us).
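For scale: "Time used:266717.406 (us)" means all four controllers attached in about 0.27 s. The -t 1000000 argument is presumably a budget in the same microsecond units the tool reports, i.e. the run would fail if startup exceeded one second; that reading is inferred from context, not from the tool's help text. A sketch of the same invocation:

#!/usr/bin/env bash
# Re-run the startup timing test with the harness's own argument.
# Assumption: -t 1000000 is a microsecond budget (unverified; matches the
# units of the "Time used" line above).
set -euo pipefail
SPDK_REPO=${SPDK_REPO:-/home/vagrant/spdk_repo/spdk}   # illustrative variable
"$SPDK_REPO/test/nvme/startup/startup" -t 1000000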
00:11:27.749
00:11:27.749
00:11:27.749 real 0m0.381s
00:11:27.749 user 0m0.152s
00:11:27.749 sys 0m0.179s
00:11:27.749 18:13:02 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:27.749 ************************************
00:11:27.749 END TEST nvme_startup
00:11:27.749 ************************************
00:11:27.749 18:13:02 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x
00:11:28.006 18:13:02 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:11:28.006 18:13:02 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:28.006 18:13:02 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:28.006 18:13:02 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:28.006 ************************************
00:11:28.006 START TEST nvme_multi_secondary
00:11:28.006 ************************************
00:11:28.006 18:13:02 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary
00:11:28.006 18:13:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65283
00:11:28.006 18:13:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:11:28.006 18:13:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65284
00:11:28.006 18:13:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:11:28.006 18:13:02 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
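The three spdk_nvme_perf invocations above are one primary/secondary exercise: every process shares instance id -i 0, each pins to its own core mask (0x1, 0x2, 0x4), and the first run's longer -t 5 keeps the primary alive while the two 3-second secondaries attach and detach. A condensed sketch of that orchestration, using only the binary and flags visible in this log; the sleep is an added assumption, giving the primary time to create its shared memory before the secondaries attach:

#!/usr/bin/env bash
set -euo pipefail
PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

# Primary process: core mask 0x1, 5-second read workload, instance id 0.
"$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
pid0=$!
sleep 1   # assumption: let the primary finish setting up shared memory

# Secondary processes: same -i 0, shorter runs, disjoint core masks.
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
pid1=$!
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4   # third run in the foreground

wait "$pid0" "$pid1"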
00:11:31.358 ======================================================== 00:11:31.358 Latency(us) 00:11:31.358 Device Information : IOPS MiB/s Average min max 00:11:31.358 PCIE (0000:00:10.0) NSID 1 from core 2: 2442.35 9.54 6545.60 1014.30 14465.90 00:11:31.358 PCIE (0000:00:11.0) NSID 1 from core 2: 2442.35 9.54 6544.44 1041.13 13712.23 00:11:31.358 PCIE (0000:00:13.0) NSID 1 from core 2: 2442.35 9.54 6544.43 1044.43 14212.00 00:11:31.358 PCIE (0000:00:12.0) NSID 1 from core 2: 2442.35 9.54 6544.59 1041.04 13996.61 00:11:31.358 PCIE (0000:00:12.0) NSID 2 from core 2: 2442.35 9.54 6544.76 1051.22 13874.04 00:11:31.358 PCIE (0000:00:12.0) NSID 3 from core 2: 2447.69 9.56 6530.41 1027.18 13749.76 00:11:31.358 ======================================================== 00:11:31.358 Total : 14659.45 57.26 6542.37 1014.30 14465.90 00:11:31.358 00:11:31.358 18:13:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65283 00:11:31.358 Initializing NVMe Controllers 00:11:31.358 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:31.358 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:31.358 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:31.358 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:31.358 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:31.358 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:31.358 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:31.358 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:31.358 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:31.358 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:31.358 Initialization complete. Launching workers. 00:11:31.358 ======================================================== 00:11:31.358 Latency(us) 00:11:31.358 Device Information : IOPS MiB/s Average min max 00:11:31.358 PCIE (0000:00:10.0) NSID 1 from core 1: 5292.24 20.67 3021.26 1037.85 6811.10 00:11:31.358 PCIE (0000:00:11.0) NSID 1 from core 1: 5292.24 20.67 3022.70 1068.80 7111.30 00:11:31.358 PCIE (0000:00:13.0) NSID 1 from core 1: 5292.24 20.67 3022.78 1054.35 6809.08 00:11:31.358 PCIE (0000:00:12.0) NSID 1 from core 1: 5292.24 20.67 3022.71 1055.90 6941.66 00:11:31.358 PCIE (0000:00:12.0) NSID 2 from core 1: 5292.24 20.67 3022.70 1064.25 6475.05 00:11:31.358 PCIE (0000:00:12.0) NSID 3 from core 1: 5292.24 20.67 3022.81 1051.94 7000.05 00:11:31.358 ======================================================== 00:11:31.358 Total : 31753.46 124.04 3022.49 1037.85 7111.30 00:11:31.358 00:11:33.890 Initializing NVMe Controllers 00:11:33.890 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:33.890 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:33.890 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:33.890 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:33.890 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:33.890 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:33.890 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:33.890 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:33.890 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:33.890 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:33.890 Initialization complete. Launching workers. 
00:11:33.890 ======================================================== 00:11:33.890 Latency(us) 00:11:33.890 Device Information : IOPS MiB/s Average min max 00:11:33.890 PCIE (0000:00:10.0) NSID 1 from core 0: 7841.30 30.63 2038.70 965.77 14788.65 00:11:33.890 PCIE (0000:00:11.0) NSID 1 from core 0: 7841.30 30.63 2039.95 994.41 14556.24 00:11:33.890 PCIE (0000:00:13.0) NSID 1 from core 0: 7841.30 30.63 2039.89 990.62 14106.63 00:11:33.890 PCIE (0000:00:12.0) NSID 1 from core 0: 7841.30 30.63 2039.83 1007.49 14664.56 00:11:33.890 PCIE (0000:00:12.0) NSID 2 from core 0: 7841.30 30.63 2039.77 983.35 14577.13 00:11:33.890 PCIE (0000:00:12.0) NSID 3 from core 0: 7841.30 30.63 2039.71 930.57 14576.86 00:11:33.890 ======================================================== 00:11:33.890 Total : 47047.80 183.78 2039.64 930.57 14788.65 00:11:33.890 00:11:33.890 18:13:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65284 00:11:33.890 18:13:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65363 00:11:33.890 18:13:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:11:33.890 18:13:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:33.890 18:13:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65364 00:11:33.890 18:13:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:11:37.179 Initializing NVMe Controllers 00:11:37.179 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:37.179 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:37.179 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:37.179 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:37.179 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:37.179 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:37.179 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:37.179 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:37.179 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:37.179 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:37.179 Initialization complete. Launching workers. 
00:11:37.179 ======================================================== 00:11:37.179 Latency(us) 00:11:37.179 Device Information : IOPS MiB/s Average min max 00:11:37.179 PCIE (0000:00:10.0) NSID 1 from core 1: 5072.26 19.81 3152.32 1146.49 6879.83 00:11:37.179 PCIE (0000:00:11.0) NSID 1 from core 1: 5072.26 19.81 3154.00 1168.52 7514.02 00:11:37.179 PCIE (0000:00:13.0) NSID 1 from core 1: 5072.26 19.81 3154.39 1168.28 7567.31 00:11:37.179 PCIE (0000:00:12.0) NSID 1 from core 1: 5072.26 19.81 3154.34 1173.46 7285.41 00:11:37.179 PCIE (0000:00:12.0) NSID 2 from core 1: 5072.26 19.81 3154.44 1159.10 7495.12 00:11:37.179 PCIE (0000:00:12.0) NSID 3 from core 1: 5072.26 19.81 3154.29 1164.35 6625.67 00:11:37.179 ======================================================== 00:11:37.179 Total : 30433.56 118.88 3153.96 1146.49 7567.31 00:11:37.179 00:11:37.179 Initializing NVMe Controllers 00:11:37.179 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:37.179 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:37.179 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:37.179 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:37.179 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:37.179 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:37.179 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:37.179 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:37.179 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:37.179 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:37.179 Initialization complete. Launching workers. 00:11:37.179 ======================================================== 00:11:37.179 Latency(us) 00:11:37.179 Device Information : IOPS MiB/s Average min max 00:11:37.179 PCIE (0000:00:10.0) NSID 1 from core 0: 5053.97 19.74 3163.62 1152.47 7355.22 00:11:37.179 PCIE (0000:00:11.0) NSID 1 from core 0: 5053.97 19.74 3165.13 1170.13 7121.92 00:11:37.179 PCIE (0000:00:13.0) NSID 1 from core 0: 5053.97 19.74 3164.96 1182.99 7489.40 00:11:37.179 PCIE (0000:00:12.0) NSID 1 from core 0: 5053.97 19.74 3164.74 1177.92 8106.33 00:11:37.179 PCIE (0000:00:12.0) NSID 2 from core 0: 5053.97 19.74 3164.51 1177.05 7964.71 00:11:37.179 PCIE (0000:00:12.0) NSID 3 from core 0: 5053.97 19.74 3164.31 1192.69 7735.46 00:11:37.179 ======================================================== 00:11:37.179 Total : 30323.83 118.45 3164.54 1152.47 8106.33 00:11:37.179 00:11:39.078 Initializing NVMe Controllers 00:11:39.078 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:39.078 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:39.078 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:39.078 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:39.078 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:39.078 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:39.078 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:39.078 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:39.078 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:39.078 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:39.078 Initialization complete. Launching workers. 
00:11:39.078 ======================================================== 00:11:39.078 Latency(us) 00:11:39.078 Device Information : IOPS MiB/s Average min max 00:11:39.078 PCIE (0000:00:10.0) NSID 1 from core 2: 3330.57 13.01 4801.54 1019.01 16072.65 00:11:39.078 PCIE (0000:00:11.0) NSID 1 from core 2: 3330.57 13.01 4803.01 1032.98 19216.73 00:11:39.078 PCIE (0000:00:13.0) NSID 1 from core 2: 3330.57 13.01 4802.92 1019.82 18785.69 00:11:39.078 PCIE (0000:00:12.0) NSID 1 from core 2: 3330.57 13.01 4802.57 1036.10 19010.27 00:11:39.078 PCIE (0000:00:12.0) NSID 2 from core 2: 3330.57 13.01 4803.20 1018.53 18995.62 00:11:39.078 PCIE (0000:00:12.0) NSID 3 from core 2: 3333.77 13.02 4798.54 934.62 15101.65 00:11:39.078 ======================================================== 00:11:39.078 Total : 19986.61 78.07 4801.96 934.62 19216.73 00:11:39.078 00:11:39.078 ************************************ 00:11:39.078 END TEST nvme_multi_secondary 00:11:39.078 ************************************ 00:11:39.078 18:13:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65363 00:11:39.078 18:13:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65364 00:11:39.078 00:11:39.078 real 0m11.294s 00:11:39.078 user 0m18.673s 00:11:39.078 sys 0m1.089s 00:11:39.078 18:13:13 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.078 18:13:13 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:11:39.335 18:13:13 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:11:39.335 18:13:13 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:11:39.335 18:13:13 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64282 ]] 00:11:39.335 18:13:13 nvme -- common/autotest_common.sh@1094 -- # kill 64282 00:11:39.335 18:13:13 nvme -- common/autotest_common.sh@1095 -- # wait 64282 00:11:39.335 [2024-11-26 18:13:13.579709] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65226) is not found. Dropping the request. 00:11:39.335 [2024-11-26 18:13:13.579950] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65226) is not found. Dropping the request. 00:11:39.335 [2024-11-26 18:13:13.580004] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65226) is not found. Dropping the request. 00:11:39.335 [2024-11-26 18:13:13.580029] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65226) is not found. Dropping the request. 00:11:39.335 [2024-11-26 18:13:13.582660] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65226) is not found. Dropping the request. 00:11:39.335 [2024-11-26 18:13:13.582896] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65226) is not found. Dropping the request. 00:11:39.335 [2024-11-26 18:13:13.583083] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65226) is not found. Dropping the request. 00:11:39.335 [2024-11-26 18:13:13.583294] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65226) is not found. Dropping the request. 00:11:39.335 [2024-11-26 18:13:13.586048] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65226) is not found. Dropping the request. 
00:11:39.335 [2024-11-26 18:13:13.586292] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65226) is not found. Dropping the request. 00:11:39.335 [2024-11-26 18:13:13.586543] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65226) is not found. Dropping the request. 00:11:39.335 [2024-11-26 18:13:13.586798] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65226) is not found. Dropping the request. 00:11:39.335 [2024-11-26 18:13:13.589443] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65226) is not found. Dropping the request. 00:11:39.335 [2024-11-26 18:13:13.589721] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65226) is not found. Dropping the request. 00:11:39.335 [2024-11-26 18:13:13.589900] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65226) is not found. Dropping the request. 00:11:39.335 [2024-11-26 18:13:13.589938] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65226) is not found. Dropping the request. 00:11:39.593 18:13:13 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:11:39.593 18:13:13 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:11:39.593 18:13:13 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:39.593 18:13:13 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:39.593 18:13:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.593 18:13:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:39.593 ************************************ 00:11:39.593 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:39.593 ************************************ 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:39.593 * Looking for test storage... 
00:11:39.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.593 18:13:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:11:39.593 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:11:39.593 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.593 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.593 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:11:39.593 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.593 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:39.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.593 --rc genhtml_branch_coverage=1 00:11:39.593 --rc genhtml_function_coverage=1 00:11:39.593 --rc genhtml_legend=1 00:11:39.593 --rc geninfo_all_blocks=1 00:11:39.593 --rc geninfo_unexecuted_blocks=1 00:11:39.593 00:11:39.593 ' 00:11:39.593 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:39.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.593 --rc genhtml_branch_coverage=1 00:11:39.593 --rc genhtml_function_coverage=1 00:11:39.593 --rc genhtml_legend=1 00:11:39.593 --rc geninfo_all_blocks=1 00:11:39.593 --rc geninfo_unexecuted_blocks=1 00:11:39.593 00:11:39.593 ' 00:11:39.593 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:39.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.593 --rc genhtml_branch_coverage=1 00:11:39.593 --rc genhtml_function_coverage=1 00:11:39.593 --rc genhtml_legend=1 00:11:39.593 --rc geninfo_all_blocks=1 00:11:39.593 --rc geninfo_unexecuted_blocks=1 00:11:39.593 00:11:39.593 ' 00:11:39.593 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:39.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.593 --rc genhtml_branch_coverage=1 00:11:39.593 --rc genhtml_function_coverage=1 00:11:39.593 --rc genhtml_legend=1 00:11:39.593 --rc geninfo_all_blocks=1 00:11:39.593 --rc geninfo_unexecuted_blocks=1 00:11:39.593 00:11:39.593 ' 00:11:39.593 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:39.593 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:39.593 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:39.593 
18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:39.593 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:39.593 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:39.593 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:39.593 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:11:39.593 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:11:39.593 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:11:39.593 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:39.593 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:11:39.593 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:39.593 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:39.593 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:39.851 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:39.851 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:39.851 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:11:39.851 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:11:39.851 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:11:39.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.851 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65525 00:11:39.851 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:39.851 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65525 00:11:39.851 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:39.851 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65525 ']' 00:11:39.851 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.851 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.851 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
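The get_first_nvme_bdf helper traced above resolves which controller the admin-command test will attach to: it asks scripts/gen_nvme.sh for a JSON config describing all local NVMe controllers, extracts the PCI addresses with jq, and takes the first one. Condensed from the trace (the error message is our addition):

    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
    bdf=${bdfs[0]}   # on this rig: 0000:00:10.0, first of the four controllers listed

That bdf is then handed to 'bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0' once spdk_tgt is up and listening on /var/tmp/spdk.sock, as the next trace lines show.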
00:11:39.851 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.851 18:13:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:39.851 [2024-11-26 18:13:14.244670] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:11:39.851 [2024-11-26 18:13:14.245113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65525 ] 00:11:40.109 [2024-11-26 18:13:14.459948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:40.367 [2024-11-26 18:13:14.643294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.367 [2024-11-26 18:13:14.643459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.367 [2024-11-26 18:13:14.643572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.367 [2024-11-26 18:13:14.643594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.301 18:13:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.301 18:13:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:11:41.301 18:13:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:11:41.301 18:13:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.301 18:13:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:41.301 nvme0n1 00:11:41.301 18:13:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.301 18:13:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:41.301 18:13:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_jXAT6.txt 00:11:41.301 18:13:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:41.301 18:13:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.301 18:13:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:41.301 true 00:11:41.301 18:13:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.301 18:13:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:41.301 18:13:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732644795 00:11:41.301 18:13:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65554 00:11:41.301 18:13:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:41.301 18:13:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:41.301 
18:13:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:43.828 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:43.828 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.828 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:43.828 [2024-11-26 18:13:17.732319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:11:43.828 [2024-11-26 18:13:17.732790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:43.828 [2024-11-26 18:13:17.732832] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:43.828 [2024-11-26 18:13:17.732852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.828 [2024-11-26 18:13:17.734908] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:11:43.828 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65554 00:11:43.828 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.828 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65554 00:11:43.828 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65554 00:11:43.828 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:43.828 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:43.828 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:43.828 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.828 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:43.828 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.828 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:43.828 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_jXAT6.txt 00:11:43.828 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_jXAT6.txt 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65525 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65525 ']' 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65525 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65525 00:11:43.829 killing process with pid 65525 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65525' 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65525 00:11:43.829 18:13:17 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65525 00:11:46.359 18:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:46.359 18:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:46.359 00:11:46.359 real 0m6.415s 00:11:46.359 user 0m22.322s 00:11:46.359 sys 0m0.828s 00:11:46.359 18:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:46.359 18:13:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:46.359 ************************************ 00:11:46.359 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:46.359 ************************************ 00:11:46.359 18:13:20 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:46.359 18:13:20 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:46.359 18:13:20 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:46.359 18:13:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.359 18:13:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:46.359 ************************************ 00:11:46.359 START TEST nvme_fio 00:11:46.359 ************************************ 00:11:46.359 18:13:20 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:11:46.359 18:13:20 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:46.359 18:13:20 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:46.359 18:13:20 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:46.359 18:13:20 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:46.359 18:13:20 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:11:46.359 18:13:20 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:46.360 18:13:20 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:46.360 18:13:20 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:46.360 18:13:20 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:46.360 18:13:20 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:46.360 18:13:20 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:11:46.360 18:13:20 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:46.360 18:13:20 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:46.360 18:13:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:46.360 18:13:20 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:46.360 18:13:20 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:46.360 18:13:20 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:46.618 18:13:20 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:46.618 18:13:20 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:46.618 18:13:20 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:46.618 18:13:20 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:46.618 18:13:20 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:46.618 18:13:20 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:46.618 18:13:20 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:46.618 18:13:20 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:46.618 18:13:20 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:46.618 18:13:20 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:46.618 18:13:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:46.618 18:13:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:46.618 18:13:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:46.618 18:13:20 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:46.618 18:13:20 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:46.618 18:13:20 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:46.618 18:13:20 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:46.618 18:13:20 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:46.877 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:46.877 fio-3.35 00:11:46.877 Starting 1 thread 00:11:50.166 00:11:50.166 test: (groupid=0, jobs=1): err= 0: pid=65709: Tue Nov 26 18:13:24 2024 00:11:50.166 read: IOPS=16.6k, BW=65.0MiB/s (68.1MB/s)(130MiB/2001msec) 00:11:50.166 slat (usec): min=4, max=126, avg= 6.38, stdev= 2.15 00:11:50.166 clat (usec): min=240, max=8641, avg=3825.10, stdev=521.96 00:11:50.166 lat (usec): min=246, max=8648, avg=3831.48, stdev=522.74 00:11:50.166 clat percentiles (usec): 00:11:50.166 | 1.00th=[ 3130], 5.00th=[ 3294], 10.00th=[ 3392], 20.00th=[ 3490], 00:11:50.166 | 30.00th=[ 3556], 40.00th=[ 3621], 50.00th=[ 3687], 60.00th=[ 3752], 00:11:50.166 | 70.00th=[ 3851], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4555], 00:11:50.166 | 99.00th=[ 5932], 99.50th=[ 6587], 99.90th=[ 8029], 99.95th=[ 8291], 00:11:50.166 | 99.99th=[ 8586] 00:11:50.166 bw ( KiB/s): min=58914, max=70072, per=98.10%, avg=65256.67, stdev=5733.66, samples=3 00:11:50.166 iops : min=14728, max=17518, avg=16314.00, stdev=1433.69, samples=3 00:11:50.166 write: IOPS=16.7k, BW=65.1MiB/s (68.3MB/s)(130MiB/2001msec); 0 zone resets 00:11:50.166 slat (nsec): min=4734, max=43871, avg=6578.97, stdev=2111.18 00:11:50.166 clat (usec): min=350, max=8728, avg=3829.70, stdev=516.54 00:11:50.166 lat (usec): min=355, max=8735, avg=3836.28, stdev=517.28 00:11:50.166 clat percentiles (usec): 00:11:50.166 | 1.00th=[ 3130], 5.00th=[ 3326], 10.00th=[ 3392], 20.00th=[ 3490], 00:11:50.166 | 30.00th=[ 3589], 40.00th=[ 3621], 50.00th=[ 3687], 60.00th=[ 3785], 00:11:50.166 | 70.00th=[ 3884], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4555], 00:11:50.166 | 99.00th=[ 5866], 99.50th=[ 6521], 99.90th=[ 8029], 99.95th=[ 8225], 00:11:50.166 | 99.99th=[ 8356] 00:11:50.166 bw ( KiB/s): min=59273, max=69368, per=97.52%, avg=65019.00, stdev=5190.47, samples=3 00:11:50.166 iops : min=14818, max=17342, avg=16254.67, stdev=1297.76, samples=3 00:11:50.166 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.01% 00:11:50.166 lat (msec) : 2=0.05%, 4=75.92%, 10=23.98% 00:11:50.166 cpu : usr=99.05%, sys=0.05%, ctx=5, majf=0, minf=606 
00:11:50.166 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:50.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.166 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:50.166 issued rwts: total=33276,33353,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.166 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:50.166 00:11:50.166 Run status group 0 (all jobs): 00:11:50.166 READ: bw=65.0MiB/s (68.1MB/s), 65.0MiB/s-65.0MiB/s (68.1MB/s-68.1MB/s), io=130MiB (136MB), run=2001-2001msec 00:11:50.166 WRITE: bw=65.1MiB/s (68.3MB/s), 65.1MiB/s-65.1MiB/s (68.3MB/s-68.3MB/s), io=130MiB (137MB), run=2001-2001msec 00:11:50.166 ----------------------------------------------------- 00:11:50.166 Suppressions used: 00:11:50.166 count bytes template 00:11:50.166 1 32 /usr/src/fio/parse.c 00:11:50.166 1 8 libtcmalloc_minimal.so 00:11:50.166 ----------------------------------------------------- 00:11:50.166 00:11:50.166 18:13:24 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:50.166 18:13:24 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:50.166 18:13:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:50.166 18:13:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:50.423 18:13:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:50.423 18:13:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:50.987 18:13:25 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:50.987 18:13:25 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:50.987 18:13:25 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:50.987 18:13:25 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:50.987 18:13:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:50.987 18:13:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:50.987 18:13:25 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:50.987 18:13:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:50.987 18:13:25 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:50.987 18:13:25 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:50.987 18:13:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:50.987 18:13:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:50.987 18:13:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:50.987 18:13:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:50.987 18:13:25 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:50.987 18:13:25 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:50.987 18:13:25 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:50.987 18:13:25 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:50.987 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:50.987 fio-3.35 00:11:50.987 Starting 1 thread 00:11:54.322 00:11:54.322 test: (groupid=0, jobs=1): err= 0: pid=65775: Tue Nov 26 18:13:28 2024 00:11:54.322 read: IOPS=15.8k, BW=61.6MiB/s (64.6MB/s)(123MiB/2001msec) 00:11:54.322 slat (nsec): min=4625, max=46644, avg=6561.34, stdev=2163.09 00:11:54.322 clat (usec): min=481, max=10141, avg=4036.16, stdev=776.80 00:11:54.322 lat (usec): min=487, max=10160, avg=4042.72, stdev=777.91 00:11:54.322 clat percentiles (usec): 00:11:54.322 | 1.00th=[ 3261], 5.00th=[ 3359], 10.00th=[ 3425], 20.00th=[ 3490], 00:11:54.322 | 30.00th=[ 3556], 40.00th=[ 3621], 50.00th=[ 3720], 60.00th=[ 4080], 00:11:54.322 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4686], 95.00th=[ 5669], 00:11:54.322 | 99.00th=[ 6849], 99.50th=[ 7242], 99.90th=[ 8848], 99.95th=[ 9634], 00:11:54.322 | 99.99th=[10159] 00:11:54.322 bw ( KiB/s): min=61816, max=66328, per=100.00%, avg=63544.00, stdev=2434.31, samples=3 00:11:54.322 iops : min=15454, max=16582, avg=15886.00, stdev=608.58, samples=3 00:11:54.322 write: IOPS=15.8k, BW=61.7MiB/s (64.7MB/s)(123MiB/2001msec); 0 zone resets 00:11:54.322 slat (nsec): min=4774, max=74030, avg=6740.09, stdev=2097.63 00:11:54.322 clat (usec): min=304, max=10227, avg=4048.24, stdev=782.60 00:11:54.322 lat (usec): min=311, max=10232, avg=4054.98, stdev=783.70 00:11:54.322 clat percentiles (usec): 00:11:54.322 | 1.00th=[ 3261], 5.00th=[ 3359], 10.00th=[ 3425], 20.00th=[ 3490], 00:11:54.322 | 30.00th=[ 3556], 40.00th=[ 3621], 50.00th=[ 3720], 60.00th=[ 4113], 00:11:54.322 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4686], 95.00th=[ 5669], 00:11:54.322 | 99.00th=[ 6849], 99.50th=[ 7242], 99.90th=[ 9241], 99.95th=[ 9896], 00:11:54.322 | 99.99th=[10159] 00:11:54.322 bw ( KiB/s): min=61960, max=65616, per=100.00%, avg=63248.00, stdev=2053.38, samples=3 00:11:54.322 iops : min=15490, max=16404, avg=15812.00, stdev=513.35, samples=3 00:11:54.322 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:11:54.322 lat (msec) : 2=0.05%, 4=58.24%, 10=41.64%, 20=0.03% 00:11:54.322 cpu : usr=99.05%, sys=0.00%, ctx=2, majf=0, minf=606 00:11:54.322 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:54.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:54.322 issued rwts: total=31556,31584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:54.322 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:54.322 00:11:54.322 Run status group 0 (all jobs): 00:11:54.322 READ: bw=61.6MiB/s (64.6MB/s), 61.6MiB/s-61.6MiB/s (64.6MB/s-64.6MB/s), io=123MiB (129MB), run=2001-2001msec 00:11:54.322 WRITE: bw=61.7MiB/s (64.7MB/s), 61.7MiB/s-61.7MiB/s (64.7MB/s-64.7MB/s), io=123MiB (129MB), run=2001-2001msec 00:11:54.322 ----------------------------------------------------- 00:11:54.322 Suppressions used: 00:11:54.322 count bytes template 00:11:54.322 1 32 /usr/src/fio/parse.c 00:11:54.322 1 8 libtcmalloc_minimal.so 00:11:54.322 ----------------------------------------------------- 00:11:54.322 
00:11:54.322 18:13:28 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:54.322 18:13:28 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:54.322 18:13:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:54.322 18:13:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:54.581 18:13:29 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:54.581 18:13:29 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:55.147 18:13:29 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:55.147 18:13:29 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:55.147 18:13:29 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:55.147 18:13:29 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:55.147 18:13:29 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:55.147 18:13:29 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:55.147 18:13:29 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:55.147 18:13:29 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:55.147 18:13:29 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:55.147 18:13:29 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:55.147 18:13:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:55.147 18:13:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:55.147 18:13:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:55.147 18:13:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:55.147 18:13:29 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:55.147 18:13:29 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:55.147 18:13:29 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:55.147 18:13:29 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:55.147 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:55.147 fio-3.35 00:11:55.147 Starting 1 thread 00:11:59.331 00:11:59.331 test: (groupid=0, jobs=1): err= 0: pid=65836: Tue Nov 26 18:13:33 2024 00:11:59.331 read: IOPS=18.1k, BW=70.7MiB/s (74.1MB/s)(142MiB/2001msec) 00:11:59.331 slat (nsec): min=4665, max=63723, avg=5995.01, stdev=1639.84 00:11:59.331 clat (usec): min=260, max=9126, avg=3513.47, stdev=358.43 00:11:59.331 lat (usec): min=267, max=9190, avg=3519.46, stdev=358.89 00:11:59.331 clat percentiles (usec): 00:11:59.331 | 1.00th=[ 2999], 5.00th=[ 3130], 10.00th=[ 3195], 20.00th=[ 3294], 00:11:59.331 | 30.00th=[ 
3326], 40.00th=[ 3392], 50.00th=[ 3425], 60.00th=[ 3490], 00:11:59.331 | 70.00th=[ 3556], 80.00th=[ 3687], 90.00th=[ 4015], 95.00th=[ 4178], 00:11:59.331 | 99.00th=[ 4490], 99.50th=[ 4948], 99.90th=[ 6128], 99.95th=[ 7504], 00:11:59.331 | 99.99th=[ 8979] 00:11:59.331 bw ( KiB/s): min=66096, max=75816, per=99.66%, avg=72168.00, stdev=5294.00, samples=3 00:11:59.331 iops : min=16524, max=18954, avg=18042.00, stdev=1323.50, samples=3 00:11:59.331 write: IOPS=18.1k, BW=70.8MiB/s (74.3MB/s)(142MiB/2001msec); 0 zone resets 00:11:59.331 slat (nsec): min=4818, max=36464, avg=6170.86, stdev=1684.06 00:11:59.331 clat (usec): min=235, max=9000, avg=3522.07, stdev=358.72 00:11:59.331 lat (usec): min=240, max=9011, avg=3528.24, stdev=359.18 00:11:59.331 clat percentiles (usec): 00:11:59.331 | 1.00th=[ 2999], 5.00th=[ 3130], 10.00th=[ 3195], 20.00th=[ 3294], 00:11:59.331 | 30.00th=[ 3359], 40.00th=[ 3392], 50.00th=[ 3458], 60.00th=[ 3490], 00:11:59.331 | 70.00th=[ 3556], 80.00th=[ 3687], 90.00th=[ 4015], 95.00th=[ 4178], 00:11:59.331 | 99.00th=[ 4490], 99.50th=[ 4948], 99.90th=[ 6194], 99.95th=[ 7701], 00:11:59.331 | 99.99th=[ 8717] 00:11:59.331 bw ( KiB/s): min=66440, max=75744, per=99.41%, avg=72098.67, stdev=4968.02, samples=3 00:11:59.331 iops : min=16610, max=18936, avg=18024.67, stdev=1242.01, samples=3 00:11:59.331 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:59.331 lat (msec) : 2=0.05%, 4=89.16%, 10=10.75% 00:11:59.331 cpu : usr=99.15%, sys=0.00%, ctx=4, majf=0, minf=607 00:11:59.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:59.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:59.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:59.331 issued rwts: total=36224,36280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:59.331 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:59.331 00:11:59.331 Run status group 0 (all jobs): 00:11:59.331 READ: bw=70.7MiB/s (74.1MB/s), 70.7MiB/s-70.7MiB/s (74.1MB/s-74.1MB/s), io=142MiB (148MB), run=2001-2001msec 00:11:59.331 WRITE: bw=70.8MiB/s (74.3MB/s), 70.8MiB/s-70.8MiB/s (74.3MB/s-74.3MB/s), io=142MiB (149MB), run=2001-2001msec 00:11:59.331 ----------------------------------------------------- 00:11:59.331 Suppressions used: 00:11:59.331 count bytes template 00:11:59.331 1 32 /usr/src/fio/parse.c 00:11:59.331 1 8 libtcmalloc_minimal.so 00:11:59.331 ----------------------------------------------------- 00:11:59.331 00:11:59.331 18:13:33 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:59.331 18:13:33 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:59.331 18:13:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:59.331 18:13:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:59.589 18:13:33 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:59.589 18:13:33 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:59.847 18:13:34 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:59.847 18:13:34 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:59.847 18:13:34 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:59.847 18:13:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:59.847 18:13:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:59.847 18:13:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:59.847 18:13:34 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:59.847 18:13:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:59.847 18:13:34 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:59.847 18:13:34 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:59.847 18:13:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:59.847 18:13:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:59.847 18:13:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:59.847 18:13:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:59.847 18:13:34 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:59.847 18:13:34 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:59.847 18:13:34 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:59.847 18:13:34 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:00.106 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:00.106 fio-3.35 00:12:00.106 Starting 1 thread 00:12:04.287 00:12:04.287 test: (groupid=0, jobs=1): err= 0: pid=65903: Tue Nov 26 18:13:38 2024 00:12:04.287 read: IOPS=16.4k, BW=64.0MiB/s (67.1MB/s)(128MiB/2001msec) 00:12:04.287 slat (nsec): min=4645, max=82634, avg=6427.83, stdev=2240.53 00:12:04.287 clat (usec): min=304, max=10036, avg=3882.74, stdev=831.58 00:12:04.287 lat (usec): min=316, max=10098, avg=3889.17, stdev=832.75 00:12:04.287 clat percentiles (usec): 00:12:04.287 | 1.00th=[ 3097], 5.00th=[ 3195], 10.00th=[ 3228], 20.00th=[ 3326], 00:12:04.287 | 30.00th=[ 3359], 40.00th=[ 3425], 50.00th=[ 3523], 60.00th=[ 3785], 00:12:04.287 | 70.00th=[ 4146], 80.00th=[ 4293], 90.00th=[ 4817], 95.00th=[ 5407], 00:12:04.288 | 99.00th=[ 7242], 99.50th=[ 7373], 99.90th=[ 7898], 99.95th=[ 8094], 00:12:04.288 | 99.99th=[ 9765] 00:12:04.288 bw ( KiB/s): min=53944, max=71248, per=99.43%, avg=65125.33, stdev=9697.92, samples=3 00:12:04.288 iops : min=13486, max=17812, avg=16281.33, stdev=2424.48, samples=3 00:12:04.288 write: IOPS=16.4k, BW=64.1MiB/s (67.2MB/s)(128MiB/2001msec); 0 zone resets 00:12:04.288 slat (nsec): min=4759, max=47769, avg=6603.92, stdev=2177.65 00:12:04.288 clat (usec): min=254, max=9856, avg=3898.87, stdev=839.92 00:12:04.288 lat (usec): min=263, max=9865, avg=3905.47, stdev=841.09 00:12:04.288 clat percentiles (usec): 00:12:04.288 | 1.00th=[ 3097], 5.00th=[ 3195], 10.00th=[ 3261], 20.00th=[ 3326], 00:12:04.288 | 30.00th=[ 3392], 40.00th=[ 3458], 50.00th=[ 3523], 60.00th=[ 3851], 00:12:04.288 | 70.00th=[ 4146], 80.00th=[ 4293], 90.00th=[ 4817], 95.00th=[ 5473], 
00:12:04.288 | 99.00th=[ 7242], 99.50th=[ 7439], 99.90th=[ 7963], 99.95th=[ 8225], 00:12:04.288 | 99.99th=[ 9634] 00:12:04.288 bw ( KiB/s): min=54168, max=70776, per=98.92%, avg=64904.00, stdev=9311.30, samples=3 00:12:04.288 iops : min=13542, max=17694, avg=16226.00, stdev=2327.82, samples=3 00:12:04.288 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:12:04.288 lat (msec) : 2=0.05%, 4=63.22%, 10=36.69%, 20=0.01% 00:12:04.288 cpu : usr=98.95%, sys=0.05%, ctx=4, majf=0, minf=605 00:12:04.288 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:04.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.288 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:04.288 issued rwts: total=32764,32823,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.288 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:04.288 00:12:04.288 Run status group 0 (all jobs): 00:12:04.288 READ: bw=64.0MiB/s (67.1MB/s), 64.0MiB/s-64.0MiB/s (67.1MB/s-67.1MB/s), io=128MiB (134MB), run=2001-2001msec 00:12:04.288 WRITE: bw=64.1MiB/s (67.2MB/s), 64.1MiB/s-64.1MiB/s (67.2MB/s-67.2MB/s), io=128MiB (134MB), run=2001-2001msec 00:12:04.288 ----------------------------------------------------- 00:12:04.288 Suppressions used: 00:12:04.288 count bytes template 00:12:04.288 1 32 /usr/src/fio/parse.c 00:12:04.288 1 8 libtcmalloc_minimal.so 00:12:04.288 ----------------------------------------------------- 00:12:04.288 00:12:04.288 18:13:38 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:04.288 18:13:38 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:12:04.288 00:12:04.288 real 0m18.456s 00:12:04.288 user 0m14.929s 00:12:04.288 sys 0m1.907s 00:12:04.288 18:13:38 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.288 ************************************ 00:12:04.288 END TEST nvme_fio 00:12:04.288 18:13:38 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:12:04.288 ************************************ 00:12:04.546 00:12:04.546 real 1m34.074s 00:12:04.546 user 3m50.374s 00:12:04.546 sys 0m15.474s 00:12:04.546 18:13:38 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.546 18:13:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:04.546 ************************************ 00:12:04.546 END TEST nvme 00:12:04.546 ************************************ 00:12:04.546 18:13:38 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:12:04.546 18:13:38 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:04.546 18:13:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:04.546 18:13:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.546 18:13:38 -- common/autotest_common.sh@10 -- # set +x 00:12:04.546 ************************************ 00:12:04.546 START TEST nvme_scc 00:12:04.546 ************************************ 00:12:04.546 18:13:38 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:04.546 * Looking for test storage... 
00:12:04.546 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:04.546 18:13:38 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:04.546 18:13:38 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:04.546 18:13:38 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:12:04.546 18:13:38 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:04.546 18:13:38 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:04.546 18:13:38 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:04.546 18:13:38 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:04.546 18:13:38 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:12:04.546 18:13:38 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:12:04.546 18:13:38 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:12:04.546 18:13:38 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:12:04.546 18:13:38 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:12:04.546 18:13:38 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:12:04.546 18:13:38 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:12:04.546 18:13:38 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:04.546 18:13:38 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:12:04.546 18:13:38 nvme_scc -- scripts/common.sh@345 -- # : 1 00:12:04.546 18:13:38 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:04.546 18:13:38 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:04.546 18:13:38 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:12:04.546 18:13:38 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:12:04.546 18:13:39 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:04.546 18:13:39 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:12:04.546 18:13:39 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:04.546 18:13:39 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:12:04.546 18:13:39 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:12:04.546 18:13:39 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:04.804 18:13:39 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:12:04.804 18:13:39 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:04.804 18:13:39 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:04.804 18:13:39 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:04.804 18:13:39 nvme_scc -- scripts/common.sh@368 -- # return 0 00:12:04.804 18:13:39 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:04.804 18:13:39 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:04.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.804 --rc genhtml_branch_coverage=1 00:12:04.804 --rc genhtml_function_coverage=1 00:12:04.804 --rc genhtml_legend=1 00:12:04.804 --rc geninfo_all_blocks=1 00:12:04.804 --rc geninfo_unexecuted_blocks=1 00:12:04.804 00:12:04.804 ' 00:12:04.804 18:13:39 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:04.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.804 --rc genhtml_branch_coverage=1 00:12:04.804 --rc genhtml_function_coverage=1 00:12:04.804 --rc genhtml_legend=1 00:12:04.804 --rc geninfo_all_blocks=1 00:12:04.804 --rc geninfo_unexecuted_blocks=1 00:12:04.804 00:12:04.804 ' 00:12:04.804 18:13:39 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:04.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.804 --rc genhtml_branch_coverage=1 00:12:04.804 --rc genhtml_function_coverage=1 00:12:04.804 --rc genhtml_legend=1 00:12:04.804 --rc geninfo_all_blocks=1 00:12:04.804 --rc geninfo_unexecuted_blocks=1 00:12:04.804 00:12:04.804 ' 00:12:04.804 18:13:39 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:04.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.804 --rc genhtml_branch_coverage=1 00:12:04.804 --rc genhtml_function_coverage=1 00:12:04.804 --rc genhtml_legend=1 00:12:04.804 --rc geninfo_all_blocks=1 00:12:04.804 --rc geninfo_unexecuted_blocks=1 00:12:04.804 00:12:04.804 ' 00:12:04.804 18:13:39 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:04.804 18:13:39 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:04.804 18:13:39 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:04.804 18:13:39 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:04.804 18:13:39 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:04.804 18:13:39 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:04.804 18:13:39 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.804 18:13:39 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.804 18:13:39 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.804 18:13:39 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.804 18:13:39 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.804 18:13:39 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.805 18:13:39 nvme_scc -- paths/export.sh@5 -- # export PATH 00:12:04.805 18:13:39 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
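[Editor's note] The lcov gate traced above relies on scripts/common.sh's cmp_versions, which splits both version strings on '.', '-' and ':' and compares the components numerically, left to right. A simplified sketch of the same comparison (the real helper also routes each component through its decimal function to normalize non-numeric parts, which is omitted here):

# Sketch: succeed when version $1 is strictly less than version $3 ('<').
cmp_versions() {
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
    done
    return 1   # equal versions are not '<'
}

cmp_versions 1.15 '<' 2 && echo 'lcov predates 2.x, keep the legacy --rc option spelling'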
00:12:04.805 18:13:39 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:12:04.805 18:13:39 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:04.805 18:13:39 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:12:04.805 18:13:39 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:04.805 18:13:39 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:12:04.805 18:13:39 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:04.805 18:13:39 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:04.805 18:13:39 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:04.805 18:13:39 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:12:04.805 18:13:39 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:04.805 18:13:39 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:12:04.805 18:13:39 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:12:04.805 18:13:39 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:12:04.805 18:13:39 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:05.062 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:05.320 Waiting for block devices as requested 00:12:05.320 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:05.320 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:05.320 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:05.578 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:10.855 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:10.855 18:13:44 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:12:10.855 18:13:44 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:10.855 18:13:44 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:12:10.855 18:13:44 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:10.855 18:13:44 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
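[Editor's note] scan_nvme_ctrls, whose xtrace fills the rest of this section, is nvme-cli output parsed into bash associative arrays: nvme_get pipes 'nvme id-ctrl /dev/nvme0' through read with IFS=':' and evals each 'reg : value' pair into a global array named after the controller, which is why every register below appears as an eval followed by the resulting assignment. A condensed sketch of that mechanism (whitespace handling is simplified relative to the real functions.sh):

# Sketch: capture "field : value" lines from nvme-cli into an assoc array.
nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                       # e.g. declare -gA nvme0
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue             # skip banner/blank lines
        reg=${reg//[[:space:]]/}              # "vid " -> "vid"
        val="${val#"${val%%[![:space:]]*}"}"  # trim leading whitespace
        eval "$ref[\$reg]=\$val"
    done < <(/usr/local/src/nvme-cli/nvme "$@")
}

nvme_get nvme0 id-ctrl /dev/nvme0
echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]}"   # 0x1b36 and 7 in the dump below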
00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.855 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:10.856 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:10.857 18:13:44 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.857 18:13:44 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:10.858 18:13:45 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:12:10.858 
18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.858 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
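[Editor's note] After the controller registers, the same helper is applied per namespace: functions.sh enumerates both the generic character node (ng0n1) and the block node (nvme0n1) under the controller's sysfs directory with an extglob alternation, then runs id-ns against each, which is why the dump repeats once per device name. A trimmed sketch of that loop, reusing the hypothetical nvme_get above:

shopt -s extglob   # needed for the @(...) alternation below

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    # Matches ng<N>n<M> (char dev) and nvme<N>n<M> (block dev) alike.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                     # ng0n1, nvme0n1, ...
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
    done
done

echo "ng0n1 nsze=${ng0n1[nsze]}"             # 0x140000 in the dump above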
00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:12:10.859 18:13:45 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:10.859 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:12:10.860 18:13:45 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:12:10.860 18:13:45 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:10.860 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:12:10.861 18:13:45 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:10.861 18:13:45 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:12:10.861 18:13:45 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:10.861 18:13:45 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:10.861 18:13:45 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.861 
18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:10.861 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:10.862 
18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:10.862 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
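The xtrace surrounding this point is the nvme_get helper from nvme/functions.sh filling the nvme1 associative array: @16 pipes /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 into a loop that splits each output line on the first ':' (@21), skips lines with an empty value such as the "NVME Identify Controller:" header (@22), and evals the pair into the array (@23). A minimal reconstruction of that loop, inferred from the traced statements only — the key/value trimming and the NVME variable are assumptions, not the verbatim source:

    # Sketch of nvme_get as traced at nvme/functions.sh@16-23; the @-numbers
    # below refer to the statements visible in the xtrace above.
    NVME=${NVME:-/usr/local/src/nvme-cli/nvme}   # binary seen at @16

    nvme_get() {
        local ref=$1 reg val              # @17: name of the array to fill
        shift                             # @18: the rest is the nvme-cli sub-command
        local -gA "$ref=()"               # @20: e.g. local -gA 'nvme1=()'

        while IFS=: read -r reg val; do   # @21: split on the first ':' only
            [[ -n $val ]] || continue     # @22: header lines carry no value
            reg=${reg// /}                # "lbaf  4 " -> "lbaf4" (trimming assumed)
            val=${val# }                  # drop the padding space after ':'
            eval "${ref}[$reg]=\"$val\""  # @23: e.g. nvme1[vid]="0x1b36"
        done < <("$NVME" "$@")            # @16: id-ctrl /dev/nvme1, id-ns /dev/ng1n1, ...
    }

The eval (rather than a plain assignment) is what makes the array-name indirection work on the left-hand side, and it is also why every register shows up twice in the xtrace: once as the eval statement and once as the assignment it expands to, e.g. eval 'nvme1[sn]="12340 "' followed by nvme1[sn]='12340 '.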
00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.863 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:10.864 18:13:45 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:10.864 18:13:45 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.864 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
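Earlier in this trace (nvme/functions.sh@47-63, with the PCI filter at scripts/common.sh@18-27) the same pass walked /sys/class/nvme/nvme*, registered nvme0 at 0000:00:11.0, and is now scanning nvme1 at 0000:00:10.0 and its ng1n1 namespace. A sketch of that enumeration, reconstructed from the traced statements; the function name, the declare lines, how $pci is derived, and the PCI_ALLOWED/PCI_BLOCKED handling are assumptions:

    # Controller/namespace scan as traced at nvme/functions.sh@47-63.
    shopt -s extglob                      # needed for the @(...) pattern at @54

    declare -A ctrls=() nvmes=() bdfs=()  # registries filled at @60-@62 (declarations assumed)
    declare -a ordered_ctrls=()           # @63

    pci_can_use() {                       # scripts/common.sh@18-27, heavily simplified
        local i                                            # @18
        [[ " ${PCI_BLOCKED:-} " =~ " $1 " ]] && return 1   # @21: reject blocked BDFs (quoting assumed)
        [[ -z ${PCI_ALLOWED:-} ]] && return 0              # @25/@27: empty allow-list admits all (assumed)
        for i in $PCI_ALLOWED; do [[ $i == "$1" ]] && return 0; done
        return 1
    }

    scan_nvme_ctrls() {                   # name assumed; the trace shows only its body
        local ctrl ctrl_dev ns ns_dev pci
        for ctrl in /sys/class/nvme/nvme*; do                   # @47
            [[ -e $ctrl ]] || continue                          # @48
            pci=$(basename "$(readlink -f "$ctrl/device")")     # @49: BDF, derivation assumed
            pci_can_use "$pci" || continue                      # @50
            ctrl_dev=${ctrl##*/}                                # @51: e.g. nvme1
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"       # @52
            declare -gA "${ctrl_dev}_ns=()"                     # per-ctrl ns map (assumed; not in trace)
            local -n _ctrl_ns=${ctrl_dev}_ns                    # @53
            # @54: extglob matches ng<N>n<M> char nodes as well as <ctrl>n<M> block nodes
            for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
                [[ -e $ns ]] || continue                        # @55
                ns_dev=${ns##*/}                                # @56: ng1n1 or nvme1n1
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"         # @57
                _ctrl_ns[${ns##*n}]=$ns_dev                     # @58: keyed by namespace index
            done
            ctrls["$ctrl_dev"]=$ctrl_dev                        # @60
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns                   # @61
            bdfs["$ctrl_dev"]=$pci                              # @62: e.g. 0000:00:10.0
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev          # @63
        done
    }

On this run the filter is wide open — the bare [[ =~ 0000:00:10.0 ]] and [[ -z '' ]] in the trace show PCI_BLOCKED and PCI_ALLOWED expanding empty — so both QEMU controllers are admitted and return 0 is taken at common.sh@27.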
00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:12:10.865 18:13:45 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.865 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 
18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
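
Worth decoding once: the geometry fields captured for ng1n1 above (and being re-read for nvme1n1 here) describe the same namespace. flbas=0x7 selects LBA format 7, and lbaf7 reads 'ms:64 lbads:12', i.e. 4096-byte data blocks carrying 64 bytes of metadata each (lbads is the log2 of the data size per the NVMe spec). A quick check in shell arithmetic:

  nsze=0x17a17a lbads=12 ms=64
  echo "block size: $((1 << lbads)) bytes (+$ms metadata)"   # 4096 (+64)
  echo "blocks    : $((nsze))"                               # 1548666
  echo "capacity  : $((nsze * (1 << lbads))) bytes"          # ~5.9 GiB

nsze, ncap and nuse are all 0x17a17a, so the namespace is fully allocated and fully utilized.
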
00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:10.866 
18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:10.866 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:10.867 18:13:45 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:12:10.867 18:13:45 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:10.867 18:13:45 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:12:10.867 18:13:45 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:10.867 18:13:45 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:10.867 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
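
Zooming out, the @47-@63 tags in the trace outline the enclosing discovery loop: walk /sys/class/nvme/nvme*, skip controllers that pci_can_use rejects, snapshot id-ctrl, then glob both the generic character node (ng1n1) and the block node (nvme1n1) of every namespace and snapshot id-ns for each. A sketch of that shape, using the trace's variable names but an assumed body (the PCI-address derivation, for instance, is not visible in the log):

  scan_nvme_ctrls() {
      shopt -s extglob                        # the @(...) glob below needs it
      local -gA ctrls nvmes bdfs              # result maps (declarations assumed)
      local -ga ordered_ctrls
      local ctrl pci ctrl_dev ns ns_dev
      for ctrl in /sys/class/nvme/nvme*; do
          pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed sysfs idiom
          pci_can_use "$pci" || continue      # honors PCI block/allow lists
          ctrl_dev=${ctrl##*/}                # e.g. nvme2
          nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
          local -ga "${ctrl_dev}_ns=()"       # per-controller namespace map (assumed)
          local -n _ctrl_ns=${ctrl_dev}_ns
          for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
              [[ -e $ns ]] || continue
              ns_dev=${ns##*/}
              nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
              _ctrl_ns[${ns##*n}]=$ns_dev     # keyed by namespace id
          done
          ctrls["$ctrl_dev"]=$ctrl_dev
          nvmes["$ctrl_dev"]=${ctrl_dev}_ns
          bdfs["$ctrl_dev"]=$pci
          ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
      done
  }

Note the overwrite visible above: ng1n1 and nvme1n1 both strip down to namespace id 1 via ${ns##*n}, so _ctrl_ns[1] is first set to ng1n1 and then replaced by nvme1n1, leaving the block device as the canonical entry.
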
00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
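
Two of the identify fields just captured deserve a gloss: cntrltype=1 marks this as an I/O controller (the NVMe base spec uses 2 for discovery and 3 for administrative controllers), and the all-zeros fguid simply means QEMU does not report an FRU globally unique identifier. A tiny decoder for the former:

  decode_cntrltype() {
      case $1 in
          1) echo "I/O controller" ;;
          2) echo "discovery controller" ;;
          3) echo "administrative controller" ;;
          *) echo "reserved ($1)" ;;
      esac
  }
  decode_cntrltype 1   # -> I/O controller
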
00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:10.868 18:13:45 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:10.868 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
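
The capability and threshold fields above decode as follows: oacs=0x12a sets bits 1, 3, 5 and 8, i.e. Format NVM, Namespace Management, Directives and Doorbell Buffer Config (the last is the paravirtual doorbell optimization QEMU's emulated controller offers), while wctemp=343 and cctemp=373 are Kelvin values per the spec, putting the warning and critical composite-temperature thresholds at roughly 70 and 100 degrees C. Checked in shell arithmetic:

  oacs=0x12a
  ((oacs & 1 << 1)) && echo "Format NVM"
  ((oacs & 1 << 3)) && echo "Namespace Management"
  ((oacs & 1 << 5)) && echo "Directives"
  ((oacs & 1 << 8)) && echo "Doorbell Buffer Config"
  echo "warn: $((343 - 273)) C, critical: $((373 - 273)) C"
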
00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:10.869 18:13:45 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.869 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:10.870 
18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.870 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:10.871 
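The id-ctrl trace above is the nvme_get helper in nvme/functions.sh repeating one pattern per register: split each "reg : val" line of nvme-cli output on ':' (functions.sh@21), skip empty values (@22), and eval the pair into a global associative array named after the device (@23). Below is a minimal sketch of that pattern, reconstructed from the trace rather than copied from the verbatim helper; it assumes bash 4+ associative arrays, and the whitespace trimming and the way the nvme-cli command is passed in are assumptions, not confirmed details of functions.sh.

  # Sketch: parse "reg : val" lines from nvme-cli into a global assoc array.
  # Usage (hypothetical): nvme_get nvme2 nvme id-ctrl /dev/nvme2
  #                       echo "${nvme2[cqes]}"   # -> 0x44
  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                 # global array, e.g. nvme2=()
      while IFS=: read -r reg val; do     # split on ':' as in the trace
          reg=${reg//[[:space:]]/}        # assumption: keys carry no spaces
          val=${val# }                    # drop the space after ':'
          [[ -n $reg && -n $val ]] || continue
          eval "${ref}[\$reg]=\$val"      # mirrors the traced eval at @23
      done < <("$@")                      # run the nvme-cli command given
  }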
18:13:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.871 18:13:45 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:12:10.871 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.872 18:13:45 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:10.872 18:13:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:12:11.136 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:11.136 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.136 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.136 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:11.136 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:12:11.136 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:12:11.136 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.136 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.136 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:11.136 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:12:11.136 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:12:11.136 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.136 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.136 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:11.136 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:12:11.136 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:12:11.136 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.136 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.136 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:11.136 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:12:11.136 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:12:11.136 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.136 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:12:11.137 18:13:45 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 
18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.137 18:13:45 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:11.137 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:11.138 18:13:45 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:11.138 18:13:45 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.138 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.139 18:13:45 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:12:11.139 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:11.140 18:13:45 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:11.140 18:13:45 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.140 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:12:11.141 18:13:45 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.141 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:11.142 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:12:11.143 
18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:12:11.143 18:13:45 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:12:11.143 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:11.144 18:13:45 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:11.144 18:13:45 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:12:11.144 18:13:45 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:11.144 18:13:45 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:12:11.144 18:13:45 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:11.144 18:13:45 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:11.144 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:11.145 18:13:45 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:11.145 18:13:45 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 
18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.145 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:11.146 18:13:45 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 
18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:11.146 
18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.146 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.147 18:13:45 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:11.147 18:13:45 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:11.147 18:13:45 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:12:11.147 18:13:45 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
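Every ctrl_has_scc check in this trace reduces to the same bitwise test: ONCS bit 8 advertises the Copy command, so a controller reporting oncs=0x15d is counted as SCC-capable. A minimal standalone sketch of that test, with the value taken from this log (illustrative only, not a quote of functions.sh):

  oncs=0x15d                    # ONCS as parsed from 'nvme id-ctrl' above
  if (( oncs & 1 << 8 )); then  # bit 8 = Copy (Simple Copy) support
      echo "controller supports SCC"
  fi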
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:12:11.148 18:13:45 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:12:11.148 18:13:45 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:12:11.148 18:13:45 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:12:11.148 18:13:45 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:12:11.713 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:12.292 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:12:12.292 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:12:12.292 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:12:12.292 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:12:12.292 18:13:46 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:12:12.292 18:13:46 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:12:12.292 18:13:46 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:12.292 18:13:46 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:12:12.292 ************************************
00:12:12.292 START TEST nvme_simple_copy
00:12:12.292 ************************************
00:12:12.292 18:13:46 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:12:12.876 Initializing NVMe Controllers
00:12:12.876 Attaching to 0000:00:10.0
00:12:12.876 Controller supports SCC. Attached to 0000:00:10.0
00:12:12.876 Namespace ID: 1 size: 6GB
00:12:12.876 Initialization complete.
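The START TEST banner above and the END TEST banner with the real/user/sys timings just below are printed by the run_test wrapper from common/autotest_common.sh, not by simple_copy itself. A rough sketch of that convention, assuming only bash built-ins (run_test_sketch is a hypothetical name; the real wrapper also manages xtrace state and exit codes):

  run_test_sketch() {
      local name=$1; shift
      printf '%s\n' "************************************" "START TEST $name" "************************************"
      time "$@"   # the bash 'time' keyword emits the real/user/sys lines seen here
      printf '%s\n' "************************************" "END TEST $name" "************************************"
  }
  # e.g.: run_test_sketch nvme_simple_copy ./simple_copy -r 'trtype:pcie traddr:0000:00:10.0'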
00:12:12.876
00:12:12.876 Controller QEMU NVMe Ctrl (12340 )
00:12:12.876 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:12:12.876 Namespace Block Size:4096
00:12:12.876 Writing LBAs 0 to 63 with Random Data
00:12:12.876 Copied LBAs from 0 - 63 to the Destination LBA 256
00:12:12.876 LBAs matching Written Data: 64
00:12:12.876
00:12:12.876 real 0m0.323s
00:12:12.876 user 0m0.136s
00:12:12.876 sys 0m0.084s
00:12:12.876 18:13:47 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:12.876 18:13:47 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:12:12.876 ************************************
00:12:12.876 END TEST nvme_simple_copy
00:12:12.876 ************************************
00:12:12.876
00:12:12.876 real 0m8.278s
00:12:12.876 user 0m1.513s
00:12:12.876 sys 0m1.730s
00:12:12.876 18:13:47 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:12.876 18:13:47 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:12:12.876 ************************************
00:12:12.876 END TEST nvme_scc
00:12:12.876 ************************************
00:12:12.876 18:13:47 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:12:12.876 18:13:47 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:12:12.876 18:13:47 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:12:12.876 18:13:47 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:12:12.876 18:13:47 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:12:12.876 18:13:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:12.876 18:13:47 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:12.876 18:13:47 -- common/autotest_common.sh@10 -- # set +x
00:12:12.876 ************************************
00:12:12.876 START TEST nvme_fdp
00:12:12.876 ************************************
00:12:12.876 18:13:47 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:12:12.876 * Looking for test storage...
00:12:12.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:12:12.876 18:13:47 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:12.876 18:13:47 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:12.876 18:13:47 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version
00:12:12.876 18:13:47 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:12.876 18:13:47 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:12.876 18:13:47 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:12.876 18:13:47 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:12.876 18:13:47 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:12:12.876 18:13:47 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:12:12.876 18:13:47 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:12:12.876 18:13:47 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:12:12.876 18:13:47 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:12:12.876 18:13:47 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:12:12.876 18:13:47 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:12:12.876 18:13:47 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:12.876 18:13:47 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:12:12.876 18:13:47 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:12:12.876 18:13:47 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:12.876 18:13:47 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:12:12.876 18:13:47 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:12:12.876 18:13:47 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:12:12.876 18:13:47 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:12.876 18:13:47 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:12:13.134 18:13:47 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:12:13.134 18:13:47 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:12:13.134 18:13:47 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:12:13.134 18:13:47 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:13.134 18:13:47 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:12:13.134 18:13:47 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:12:13.134 18:13:47 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:13.134 18:13:47 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:13.134 18:13:47 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:12:13.134 18:13:47 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:13.134 18:13:47 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:13.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.134 --rc genhtml_branch_coverage=1 00:12:13.134 --rc genhtml_function_coverage=1 00:12:13.134 --rc genhtml_legend=1 00:12:13.134 --rc geninfo_all_blocks=1 00:12:13.134 --rc geninfo_unexecuted_blocks=1 00:12:13.134 00:12:13.134 ' 00:12:13.134 18:13:47 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:13.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.134 --rc genhtml_branch_coverage=1 00:12:13.134 --rc genhtml_function_coverage=1 00:12:13.134 --rc genhtml_legend=1 00:12:13.134 --rc geninfo_all_blocks=1 00:12:13.134 --rc geninfo_unexecuted_blocks=1 00:12:13.134 00:12:13.134 ' 00:12:13.134 18:13:47 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:13.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.134 --rc genhtml_branch_coverage=1 00:12:13.134 --rc genhtml_function_coverage=1 00:12:13.134 --rc genhtml_legend=1 00:12:13.134 --rc geninfo_all_blocks=1 00:12:13.134 --rc geninfo_unexecuted_blocks=1 00:12:13.134 00:12:13.134 ' 00:12:13.134 18:13:47 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:13.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:13.135 --rc genhtml_branch_coverage=1 00:12:13.135 --rc genhtml_function_coverage=1 00:12:13.135 --rc genhtml_legend=1 00:12:13.135 --rc geninfo_all_blocks=1 00:12:13.135 --rc geninfo_unexecuted_blocks=1 00:12:13.135 00:12:13.135 ' 00:12:13.135 18:13:47 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:13.135 18:13:47 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:13.135 18:13:47 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:13.135 18:13:47 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:13.135 18:13:47 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:13.135 18:13:47 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:12:13.135 18:13:47 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:13.135 18:13:47 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:13.135 18:13:47 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:13.135 18:13:47 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.135 18:13:47 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.135 18:13:47 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.135 18:13:47 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:12:13.135 18:13:47 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:13.135 18:13:47 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:12:13.135 18:13:47 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:13.135 18:13:47 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:12:13.135 18:13:47 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:13.135 18:13:47 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:12:13.135 18:13:47 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:13.135 18:13:47 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:13.135 18:13:47 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:13.135 18:13:47 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:12:13.135 18:13:47 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:13.135 18:13:47 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:13.393 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:13.651 Waiting for block devices as requested 00:12:13.651 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:13.651 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:13.651 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:13.909 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:19.177 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:19.177 18:13:53 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:12:19.177 18:13:53 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:12:19.177 18:13:53 nvme_fdp -- scripts/common.sh@18 -- # local i 00:12:19.177 18:13:53 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:12:19.177 18:13:53 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:19.177 18:13:53 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.177 18:13:53 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.177 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:19.178 18:13:53 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:12:19.178 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:12:19.179 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
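Everything in this stage is driven by the nvme_get helper traced above: it feeds nvme-cli's "field : value" text output through a while IFS=: read -r reg val loop, skips empty values, and caches each field in a bash associative array named after the device; the eval steps in the trace exist only so the assignment can go through the array name handed in as $ref. A minimal standalone sketch of the same pattern (the ctrl array and the /dev/nvme0 path are illustrative, not the functions.sh source):

    #!/usr/bin/env bash
    # Sketch: cache nvme-cli "field : value" output in an associative array.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue      # skip banner/blank lines, as the trace does
        reg=${reg//[[:space:]]/}       # field names are single tokens; drop padding
        ctrl[$reg]=${val# }            # keep the value verbatim past one leading space
    done < <(nvme id-ctrl /dev/nvme0)
    # Worked decode: sqes=0x66 packs the min/max SQ entry sizes as powers of two,
    # so both are 2^6 = 64 bytes; cqes=0x44 likewise gives 2^4 = 16-byte CQ entries.
    echo "sqes=${ctrl[sqes]} cqes=${ctrl[cqes]} subnqn=${ctrl[subnqn]}"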
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
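functions.sh@54 walks a controller's namespaces with a single extglob that matches both node flavors at once: ng0n1, the generic character device, and nvme0n1, the block device (the same loop runs again for the second node further down). A minimal sketch of that expansion, assuming extglob is enabled as functions.sh requires:

    # Sketch: expand both namespace node flavors for one controller.
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme0
    # "ng0"* catches char nodes (ng0n1); "nvme0n"* catches block nodes (nvme0n1).
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "node=${ns##*/} index=${ns##*n}"   # ${ns##*n}: text after the last 'n'
    done
    # -> node=ng0n1 index=1
    # -> node=nvme0n1 index=1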
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()'
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3
00:12:19.180 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:12:19.181 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 '
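Each lbafN entry just recorded is one LBA format descriptor: ms is the metadata bytes carried per block, lbads the log2 of the data block size, and rp a relative-performance hint; with flbas=0x4 the namespace is formatted to lbaf4, i.e. 4096-byte (2^12) blocks with no inline metadata. A small sketch that decodes such a string (decode_lbaf is an illustrative helper, not part of functions.sh):

    # Sketch: turn "ms:0 lbads:12 rp:0 (in use)" into concrete sizes.
    decode_lbaf() {
        local lbaf=$1 ms lbads rp
        [[ $lbaf =~ ms:([0-9]+)\ +lbads:([0-9]+)\ +rp:([0-9]+) ]] || return 1
        ms=${BASH_REMATCH[1]} lbads=${BASH_REMATCH[2]} rp=${BASH_REMATCH[3]}
        printf 'block=%d bytes, metadata=%d bytes, rp=%d\n' $((1 << lbads)) "$ms" "$rp"
    }
    decode_lbaf 'ms:0 lbads:12 rp:0 (in use)'   # block=4096 bytes, metadata=0 bytes, rp=0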
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:12:19.182 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
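With both nodes cached, the records below close out the controller: functions.sh@58 keys the per-controller namespace array by the index ${ns##*n} leaves over, @60-@63 drop nvme0 into the global ctrls/nvmes/bdfs/ordered_ctrls maps under BDF 0000:00:11.0, and the next controller is only scanned after pci_can_use clears its address against the allow/block checks in scripts/common.sh. A condensed sketch of that bookkeeping; scan_one, the flat ns_map, and the PCI_BLOCKED gate are illustrative stand-ins for functions.sh's per-controller nameref arrays and common.sh's real filter:

    # Sketch: register one controller and its namespace nodes, functions.sh-style.
    shopt -s extglob nullglob
    declare -A ctrls=() bdfs=() ns_map=()

    scan_one() {
        local ctrl=$1 ns bdf               # e.g. /sys/class/nvme/nvme0
        local dev=${ctrl##*/}              # -> nvme0
        bdf=$(readlink -f "$ctrl/device")  # one way to recover the PCI BDF from sysfs
        bdf=${bdf##*/}                     # e.g. 0000:00:11.0
        [[ ${PCI_BLOCKED:-} == *"$bdf"* ]] && return 0   # stand-in for pci_can_use
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${dev}n")*; do
            ns_map[${ns##*n}]=${ns##*/}    # both flavors share an index; the block
        done                               # node overwrites the char node, as traced
        ctrls[$dev]=$dev
        bdfs[$dev]=$bdf
    }
    scan_one /sys/class/nvme/nvme0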
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:12:19.183 18:13:53 nvme_fdp -- scripts/common.sh@18 -- # local i 00:12:19.183 18:13:53 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:12:19.183 18:13:53 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:19.183 18:13:53 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:19.183 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:19.184 18:13:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:19.184 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.185 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.186 18:13:53 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
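oncs=0x15d is a bit mask of optional NVM commands. Read against the NVMe base specification bit assignments (0 Compare, 1 Write Uncorrectable, 2 Dataset Management, 3 Write Zeroes, 4 Save/Select in Get/Set Features, 5 Reservations, 6 Timestamp, 7 Verify, 8 Copy), this QEMU controller advertises Compare, Dataset Management, Write Zeroes, Save/Select, Timestamp, and Copy, which lines up with the nonzero per-namespace copy limits (mssrl=128, mcl=128, msrc=127) in the id-ns dumps. A quick illustrative decode; decode_oncs is not part of the test scripts:

    # Illustrative ONCS decode; bit names follow the NVMe base spec.
    decode_oncs() {
        local oncs=$1 bit
        local names=("Compare" "Write Uncorrectable" "Dataset Management"
                     "Write Zeroes" "Save/Select in Features" "Reservations"
                     "Timestamp" "Verify" "Copy")
        for bit in "${!names[@]}"; do
            (( oncs & (1 << bit) )) && echo "bit $bit: ${names[bit]}"
        done
    }
    decode_oncs 0x15d   # -> bits 0, 2, 3, 4, 6, 8 set

mdts=7 similarly bounds a single transfer to 2^7 minimum-size pages; assuming the usual 4 KiB CAP.MPSMIN (CAP itself is not in this dump), that is 512 KiB per command.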
00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh -- # local -n _ctrl_ns=nvme1_ns
00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; [[ -e /sys/class/nvme/nvme1/ng1n1 ]]; ns_dev=ng1n1
00:12:19.186 18:13:53 nvme_fdp -- nvme/functions.sh -- # nvme_get ng1n1 id-ns /dev/ng1n1 (local -gA 'ng1n1=()'; /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1)
00:12:19.187 18:13:53 nvme_fdp -- nvme/functions.sh -- # ng1n1 id-ns: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:12:19.188 18:13:53 nvme_fdp -- nvme/functions.sh -- # ng1n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
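For ng1n1, flbas=0x7 selects LBA format 7, and lbaf7 above reads 'ms:64 lbads:12 (in use)': 4096-byte data blocks plus 64 bytes of metadata per block. Combined with nsze=0x17a17a (the namespace size in blocks), the data capacity of this emulated namespace works out as below; the variable names are simply the register names from the dump:

    # Namespace data capacity from the captured registers:
    # nsze blocks x 2^lbads bytes per block.
    nsze=0x17a17a   # 1548666 blocks
    lbads=12        # from lbaf7 'ms:64 lbads:12 (in use)'
    echo $(( nsze * (1 << lbads) ))            # 6343335936 bytes
    echo $(( nsze * (1 << lbads) >> 30 ))GiB   # 5GiB (integer floor; ~5.9 GiB)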
00:12:19.188 18:13:53 nvme_fdp -- nvme/functions.sh -- # _ctrl_ns[${ns##*n}]=ng1n1
00:12:19.188 18:13:53 nvme_fdp -- nvme/functions.sh -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]; ns_dev=nvme1n1
00:12:19.188 18:13:53 nvme_fdp -- nvme/functions.sh -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 (local -gA 'nvme1n1=()'; /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1)
00:12:19.188 18:13:53 nvme_fdp -- nvme/functions.sh -- # nvme1n1 id-ns: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0
00:12:19.189 18:13:53 nvme_fdp --
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
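[editor's note] The per-field assignments in this trace are emitted by the nvme_get helper in test/nvme/functions.sh: it runs nvme-cli against the device, splits each output line on ':' via IFS, and evals the pair into a global associative array (nvme1n1[nsze]="0x17a17a" and so on). A minimal sketch of that loop, reconstructed from the functions.sh@16-23 entries visible above rather than the verbatim SPDK source; NVME_CMD is an assumed variable (the log shows /usr/local/src/nvme-cli/nvme being invoked):

# Sketch of nvme_get, reconstructed from the functions.sh@16-23 trace lines
# above; not the verbatim SPDK source.
nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                     # e.g. declare -gA nvme1n1=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # "nsze   " -> "nsze"
        [[ -n $val ]] || continue           # skip header/blank lines (the [[ -n '' ]] hits above)
        eval "${ref}[$reg]=\"${val# }\""    # nvme1n1[nsze]="0x17a17a"
    done < <("${NVME_CMD:-nvme}" "$@")      # assumed var; e.g. id-ns /dev/nvme1n1
}

Called as nvme_get nvme1n1 id-ns /dev/nvme1n1, matching the functions.sh@57 invocation in this trace.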
00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:12:19.189 18:13:53 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:12:19.190 18:13:53 nvme_fdp -- scripts/common.sh@18 -- # local i 00:12:19.190 18:13:53 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:12:19.190 18:13:53 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:19.190 18:13:53 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.190 18:13:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:19.190 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:19.454 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.454 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.454 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:19.454 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:19.454 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:19.454 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.454 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.454 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
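[editor's note] Among the controller fields just captured, mdts=7 bounds the maximum data transfer size: the NVMe spec defines it as 2^MDTS units of the controller's minimum memory page size (CAP.MPSMIN). Assuming the common 4 KiB minimum page (an assumption; CAP is not shown in this trace):

# MDTS is a power-of-two multiplier of CAP.MPSMIN (assumed 4 KiB here)
echo $(( 4096 << ${nvme2[mdts]} ))   # 4096 << 7 = 524288 bytes (512 KiB)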
00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:19.455 18:13:53 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
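[editor's note] The wctemp/cctemp values just parsed are in kelvins per the NVMe spec, so this QEMU controller advertises a warning threshold of 343 K (70 C) and a critical threshold of 373 K (100 C):

# Temperature thresholds are reported in kelvins; convert to Celsius
echo "warn: $(( ${nvme2[wctemp]} - 273 )) C, crit: $(( ${nvme2[cctemp]} - 273 )) C"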
00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:19.455 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:19.456 18:13:53 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.456 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
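[editor's note] The functions.sh@53-58 namespace-loop entries just ahead, together with the @47-52 and @60-63 entries earlier in this trace, show the outer enumeration: walk /sys/class/nvme/nvme*, filter by pci_can_use, parse the controller via nvme_get, then each ng*/nvme*n* namespace, and record the results in the ctrls/nvmes/bdfs maps. An approximate reconstruction of that loop follows; the sysfs readlink used to obtain the PCI BDF is an assumption (the trace only shows the resulting pci value), and pci_can_use comes from scripts/common.sh as seen above:

# Approximate shape of the scan driving this trace (functions.sh@47-63);
# reconstructed from the log, not the verbatim SPDK source.
shopt -s extglob nullglob
declare -gA ctrls nvmes bdfs
declare -ga ordered_ctrls
scan_nvme_ctrls() {
    local ctrl pci ctrl_dev ns ns_dev
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed BDF source
        pci_can_use "$pci" || continue                    # honors PCI allow/block lists
        ctrl_dev=${ctrl##*/}
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        local -n _ctrl_ns=${ctrl_dev}_ns                  # matches functions.sh@53
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            ns_dev=${ns##*/}
            [[ -e /sys/class/nvme/$ctrl_dev/$ns_dev ]] || continue
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns_dev##*n}]=$ns_dev               # keyed by namespace id
        done
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci                            # e.g. 0000:00:12.0 for nvme2
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done
}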
00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.457 
18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.457 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.458 18:13:53 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:19.458 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:12:19.459 18:13:53 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.459 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 
18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:12:19.460 
18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.460 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
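For orientation amid this trace: functions.sh@54 walks every namespace node under the controller with an extglob, and functions.sh@16-23 split each line of `nvme id-ns` output on ':' and eval the pair into a global associative array named after the node. A minimal self-contained sketch of that pattern, under stated assumptions — the helper name parse_id_ns is illustrative, not SPDK's exact code, and the run above invokes a locally built nvme-cli:

    # Sketch of the pattern being traced here (illustrative, not SPDK's
    # exact code): enumerate both the generic (ng2n*) and block (nvme2n*)
    # namespace nodes of one controller, then parse `nvme id-ns` for each
    # into a global associative array named after the node.
    shopt -s extglob nullglob

    parse_id_ns() {                    # parse_id_ns <array-name> <device>
      local ref=$1 dev=$2 reg val
      local -gA "$ref=()"              # cf. functions.sh@20 in the trace
      while IFS=: read -r reg val; do
        [[ -n $val ]] || continue      # skips the "NVME Identify Namespace" header
        reg=${reg//[[:space:]]/}       # "lbaf  4 " -> "lbaf4", as in the traced keys
        val=${val# }                   # drop the single leading space
        eval "${ref}[$reg]=\"\$val\""  # e.g. ng2n3[dlfeat]=1
      done < <(nvme id-ns "$dev")
    }

    ctrl=/sys/class/nvme/nvme2
    # @("ng2"|"nvme2n")* after expansion: matches ng2n1.. and nvme2n1.. alike.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      parse_id_ns "${ns##*/}" "/dev/${ns##*/}"
    done
    declare -p ng2n3 2>/dev/null       # dumps the parsed fields on this test node

Stashing the identify data in per-device associative arrays is what lets the rest of the suite query fields directly, e.g. ${ng2n3[flbas]}, instead of re-running nvme-cli. The raw trace resumes below.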
00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:12:19.461 18:13:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:19.461 18:13:53 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.461 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:19.462 18:13:53 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:19.462 
18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:19.462 18:13:53 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.462 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:19.463 
18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
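With the lbaf0-lbaf7 descriptors captured, the active format is fixed by flbas: its low nibble (bits 3:0) indexes the lbaf table, and on this run flbas=0x4 points at lbaf4 ('ms:0 lbads:12 rp:0 (in use)'), i.e. 4096-byte data blocks with no metadata. A small worked sketch of that decoding, reusing values taken from the trace above:

    # Worked decode of the fields above, using the values from this run.
    declare -A ns=([nsze]=0x100000 [flbas]=0x4 [lbaf4]='ms:0 lbads:12 rp:0 (in use)')
    fmt=$(( ns[flbas] & 0xf ))          # FLBAS bits 3:0 = active LBA format index
    lbaf=${ns[lbaf$fmt]}                # -> 'ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf#*lbads:}               # -> '12 rp:0 (in use)'
    lbads=${lbads%% *}                  # -> '12'
    echo "data block size: $(( 1 << lbads )) bytes"               # 4096
    echo "namespace size:  $(( ns[nsze] * (1 << lbads) )) bytes"  # 4294967296

That squares with nsze=ncap=nuse=0x100000 reported for every namespace in this dump: 1,048,576 blocks of 4 KiB, a fully allocated 4 GiB namespace. The trace continues below with the remaining block-device namespaces of nvme2.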
00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:12:19.463 18:13:53 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:12:19.463 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:12:19.464 18:13:53 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:19.464 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:12:19.465 18:13:53 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.465 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:12:19.466 18:13:53 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:19.466 18:13:53 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:12:19.466 18:13:53 nvme_fdp -- scripts/common.sh@18 -- # local i 00:12:19.466 18:13:53 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:12:19.466 18:13:53 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:19.466 18:13:53 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.466 18:13:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:19.466 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
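The eval/assignment pairs in this stretch are nvme_get mirroring /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 into the global associative array nvme3, one "reg : val" line at a time via IFS=: and read -r. A condensed sketch of that pattern as visible in the trace, assuming bash 4+; the real nvme/functions.sh differs in details such as key normalization and value trimming:

# Sketch: mirror "reg : val" lines from nvme-cli into a global assoc array.
nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                     # e.g. declares global nvme3=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # "mdts   " -> "mdts"
        # Assignment context, so $val is not word-split; header lines
        # ("NVME Identify Controller:") have empty $val and are skipped.
        [[ -n $val ]] && eval "${ref}[\$reg]=\${val# }"
    done < <(/usr/local/src/nvme-cli/nvme "$@")
}

nvme_get nvme3 id-ctrl /dev/nvme3           # fills nvme3[vid], nvme3[mdts], ...

Two of the captured values are themselves encoded: mdts=7 limits data transfers to 2^7 minimum-size memory pages (512 KiB with a 4 KiB MPSMIN), and the ver=0x10400 read next unpacks as NVMe 1.4.0 (major 0x1, minor 0x04, tertiary 0x00).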
00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.467 18:13:53 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 
18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:19.467 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.468 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.727 18:13:53 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:12:19.727 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
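The trace above is nvme/functions.sh ingesting an identify-controller dump one field at a time: each "reg : val" line is split with IFS=: and read -r, and the value is stored into a bash associative array named after the controller via eval. A minimal standalone sketch of that pattern (the field trimming and the nvme id-ctrl invocation are illustrative assumptions, not the exact functions.sh internals):

    #!/usr/bin/env bash
    # Parse "reg : val" pairs into an associative array, mirroring the
    # IFS=: / read -r / eval pattern traced above.
    declare -A nvme3
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # strip padding around the field name
        val=${val# }                    # drop the single leading space
        [[ -n $val ]] && eval "nvme3[$reg]=\"$val\""
    done < <(nvme id-ctrl /dev/nvme3)   # assumed nvme-cli invocation (needs root)
    echo "sqes=${nvme3[sqes]} cqes=${nvme3[cqes]} oncs=${nvme3[oncs]}"

The eval-with-double-quotes style matches the traced statements (eval 'nvme3[mxtmt]="0"'); values containing double quotes would need extra escaping that this sketch omits.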
00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:19.728 18:13:53 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:12:19.728 18:13:53 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:12:19.729 18:13:53 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:12:19.729 18:13:53 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:12:19.729 18:13:53 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:12:19.729 18:13:53 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:19.987 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:20.920 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:20.920 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:20.920 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:20.920 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:20.920 18:13:55 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:20.920 18:13:55 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:20.920 18:13:55 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.920 18:13:55 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:20.920 ************************************ 00:12:20.920 START TEST nvme_flexible_data_placement 00:12:20.920 ************************************ 00:12:20.920 18:13:55 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:21.179 Initializing NVMe Controllers 00:12:21.179 Attaching to 0000:00:13.0 00:12:21.179 Controller supports FDP Attached to 0000:00:13.0 00:12:21.179 Namespace ID: 1 Endurance Group ID: 1 00:12:21.179 Initialization complete. 
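The controller selection that bound this run to 0000:00:13.0 is visible in the trace above: get_ctrls_with_feature walks every parsed controller and ctrl_has_fdp tests CTRATT bit 19, the Flexible Data Placement capability bit. nvme3 reports ctratt=0x88010, which has bit 19 (0x80000) set, while nvme0/1/2 report 0x8000, so only nvme3 is echoed. A condensed sketch of that gate (the literal values are the ones logged; the lookup plumbing is omitted):

    # CTRATT bit 19 marks Flexible Data Placement support.
    ctrl_has_fdp() {
        local ctratt=$1
        (( ctratt & (1 << 19) ))      # true when bit 19 (0x80000) is set
    }
    ctrl_has_fdp 0x88010 && echo "nvme3: FDP supported"   # prints
    ctrl_has_fdp 0x8000  || echo "nvme2: no FDP"          # prints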
00:12:21.179 00:12:21.179 ================================== 00:12:21.179 == FDP tests for Namespace: #01 == 00:12:21.179 ================================== 00:12:21.179 00:12:21.179 Get Feature: FDP: 00:12:21.179 ================= 00:12:21.179 Enabled: Yes 00:12:21.179 FDP configuration Index: 0 00:12:21.179 00:12:21.179 FDP configurations log page 00:12:21.179 =========================== 00:12:21.179 Number of FDP configurations: 1 00:12:21.179 Version: 0 00:12:21.179 Size: 112 00:12:21.179 FDP Configuration Descriptor: 0 00:12:21.179 Descriptor Size: 96 00:12:21.179 Reclaim Group Identifier format: 2 00:12:21.179 FDP Volatile Write Cache: Not Present 00:12:21.179 FDP Configuration: Valid 00:12:21.179 Vendor Specific Size: 0 00:12:21.179 Number of Reclaim Groups: 2 00:12:21.179 Number of Reclaim Unit Handles: 8 00:12:21.179 Max Placement Identifiers: 128 00:12:21.179 Number of Namespaces Supported: 256 00:12:21.179 Reclaim Unit Nominal Size: 6000000 bytes 00:12:21.179 Estimated Reclaim Unit Time Limit: Not Reported 00:12:21.179 RUH Desc #000: RUH Type: Initially Isolated 00:12:21.179 RUH Desc #001: RUH Type: Initially Isolated 00:12:21.179 RUH Desc #002: RUH Type: Initially Isolated 00:12:21.179 RUH Desc #003: RUH Type: Initially Isolated 00:12:21.179 RUH Desc #004: RUH Type: Initially Isolated 00:12:21.179 RUH Desc #005: RUH Type: Initially Isolated 00:12:21.179 RUH Desc #006: RUH Type: Initially Isolated 00:12:21.179 RUH Desc #007: RUH Type: Initially Isolated 00:12:21.179 00:12:21.179 FDP reclaim unit handle usage log page 00:12:21.179 ====================================== 00:12:21.179 Number of Reclaim Unit Handles: 8 00:12:21.179 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:21.179 RUH Usage Desc #001: RUH Attributes: Unused 00:12:21.179 RUH Usage Desc #002: RUH Attributes: Unused 00:12:21.179 RUH Usage Desc #003: RUH Attributes: Unused 00:12:21.179 RUH Usage Desc #004: RUH Attributes: Unused 00:12:21.179 RUH Usage Desc #005: RUH Attributes: Unused 00:12:21.179 RUH Usage Desc #006: RUH Attributes: Unused 00:12:21.179 RUH Usage Desc #007: RUH Attributes: Unused 00:12:21.179 00:12:21.179 FDP statistics log page 00:12:21.179 ======================= 00:12:21.179 Host bytes with metadata written: 834936832 00:12:21.179 Media bytes with metadata written: 835100672 00:12:21.179 Media bytes erased: 0 00:12:21.179 00:12:21.179 FDP Reclaim unit handle status 00:12:21.179 ============================== 00:12:21.179 Number of RUHS descriptors: 2 00:12:21.179 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000043be 00:12:21.179 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:12:21.179 00:12:21.179 FDP write on placement id: 0 success 00:12:21.179 00:12:21.179 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:12:21.179 00:12:21.179 IO mgmt send: RUH update for Placement ID: #0 Success 00:12:21.179 00:12:21.179 Get Feature: FDP Events for Placement handle: #0 00:12:21.179 ======================== 00:12:21.179 Number of FDP Events: 6 00:12:21.179 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:12:21.179 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:12:21.179 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:12:21.179 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:12:21.179 FDP Event: #4 Type: Media Reallocated Enabled: No 00:12:21.179 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:12:21.179 00:12:21.179 FDP events log page
00:12:21.179 =================== 00:12:21.179 Number of FDP events: 1 00:12:21.179 FDP Event #0: 00:12:21.179 Event Type: RU Not Written to Capacity 00:12:21.179 Placement Identifier: Valid 00:12:21.179 NSID: Valid 00:12:21.179 Location: Valid 00:12:21.179 Placement Identifier: 0 00:12:21.179 Event Timestamp: 7 00:12:21.179 Namespace Identifier: 1 00:12:21.179 Reclaim Group Identifier: 0 00:12:21.179 Reclaim Unit Handle Identifier: 0 00:12:21.179 00:12:21.179 FDP test passed 00:12:21.179 00:12:21.179 real 0m0.280s 00:12:21.179 user 0m0.096s 00:12:21.179 sys 0m0.082s 00:12:21.179 18:13:55 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.179 18:13:55 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:12:21.179 ************************************ 00:12:21.179 END TEST nvme_flexible_data_placement 00:12:21.180 ************************************ 00:12:21.180 00:12:21.180 real 0m8.372s 00:12:21.180 user 0m1.501s 00:12:21.180 sys 0m1.848s 00:12:21.180 18:13:55 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.180 ************************************ 00:12:21.180 END TEST nvme_fdp 00:12:21.180 18:13:55 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:12:21.180 ************************************ 00:12:21.180 18:13:55 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:12:21.180 18:13:55 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:21.180 18:13:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:21.180 18:13:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.180 18:13:55 -- common/autotest_common.sh@10 -- # set +x 00:12:21.180 ************************************ 00:12:21.180 START TEST nvme_rpc 00:12:21.180 ************************************ 00:12:21.180 18:13:55 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:21.438 * Looking for test storage... 
00:12:21.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:21.438 18:13:55 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:21.438 18:13:55 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:12:21.438 18:13:55 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:21.438 18:13:55 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:21.438 18:13:55 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:12:21.438 18:13:55 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:21.438 18:13:55 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:21.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.438 --rc genhtml_branch_coverage=1 00:12:21.438 --rc genhtml_function_coverage=1 00:12:21.438 --rc genhtml_legend=1 00:12:21.438 --rc geninfo_all_blocks=1 00:12:21.438 --rc geninfo_unexecuted_blocks=1 00:12:21.438 00:12:21.438 ' 00:12:21.438 18:13:55 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:21.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.438 --rc genhtml_branch_coverage=1 00:12:21.438 --rc genhtml_function_coverage=1 00:12:21.438 --rc genhtml_legend=1 00:12:21.438 --rc geninfo_all_blocks=1 00:12:21.438 --rc geninfo_unexecuted_blocks=1 00:12:21.438 00:12:21.438 ' 00:12:21.438 18:13:55 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:21.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.438 --rc genhtml_branch_coverage=1 00:12:21.438 --rc genhtml_function_coverage=1 00:12:21.438 --rc genhtml_legend=1 00:12:21.438 --rc geninfo_all_blocks=1 00:12:21.438 --rc geninfo_unexecuted_blocks=1 00:12:21.438 00:12:21.438 ' 00:12:21.438 18:13:55 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:21.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:21.438 --rc genhtml_branch_coverage=1 00:12:21.438 --rc genhtml_function_coverage=1 00:12:21.438 --rc genhtml_legend=1 00:12:21.438 --rc geninfo_all_blocks=1 00:12:21.438 --rc geninfo_unexecuted_blocks=1 00:12:21.438 00:12:21.438 ' 00:12:21.438 18:13:55 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:21.438 18:13:55 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:12:21.438 18:13:55 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:21.438 18:13:55 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:12:21.438 18:13:55 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:12:21.438 18:13:55 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:12:21.438 18:13:55 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:21.438 18:13:55 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:12:21.438 18:13:55 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:21.438 18:13:55 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:21.438 18:13:55 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:21.438 18:13:55 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:21.439 18:13:55 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:21.439 18:13:55 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:12:21.439 18:13:55 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:12:21.439 18:13:55 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67293 00:12:21.439 18:13:55 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:12:21.439 18:13:55 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:21.439 18:13:55 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67293 00:12:21.439 18:13:55 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67293 ']' 00:12:21.439 18:13:55 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.439 18:13:55 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.439 18:13:55 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.439 18:13:55 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.439 18:13:55 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:21.696 [2024-11-26 18:13:55.952612] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
00:12:21.697 [2024-11-26 18:13:55.952851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67293 ] 00:12:21.697 [2024-11-26 18:13:56.139661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:21.955 [2024-11-26 18:13:56.276421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.955 [2024-11-26 18:13:56.276429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.914 18:13:57 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.914 18:13:57 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:22.914 18:13:57 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:12:23.171 Nvme0n1 00:12:23.171 18:13:57 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:12:23.172 18:13:57 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:12:23.428 request: 00:12:23.428 { 00:12:23.428 "bdev_name": "Nvme0n1", 00:12:23.428 "filename": "non_existing_file", 00:12:23.428 "method": "bdev_nvme_apply_firmware", 00:12:23.428 "req_id": 1 00:12:23.428 } 00:12:23.428 Got JSON-RPC error response 00:12:23.428 response: 00:12:23.428 { 00:12:23.428 "code": -32603, 00:12:23.428 "message": "open file failed." 00:12:23.428 } 00:12:23.428 18:13:57 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:12:23.428 18:13:57 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:12:23.428 18:13:57 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:12:23.685 18:13:58 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:23.685 18:13:58 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67293 00:12:23.685 18:13:58 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67293 ']' 00:12:23.685 18:13:58 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67293 00:12:23.685 18:13:58 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:23.685 18:13:58 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:23.685 18:13:58 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67293 00:12:23.942 18:13:58 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:23.942 18:13:58 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:23.942 killing process with pid 67293 00:12:23.942 18:13:58 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67293' 00:12:23.942 18:13:58 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67293 00:12:23.942 18:13:58 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67293 00:12:26.479 00:12:26.479 real 0m4.795s 00:12:26.479 user 0m9.199s 00:12:26.479 sys 0m0.814s 00:12:26.479 18:14:00 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.479 18:14:00 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.479 ************************************ 00:12:26.479 END TEST nvme_rpc 00:12:26.479 ************************************ 00:12:26.479 18:14:00 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:26.479 18:14:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:12:26.479 18:14:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.479 18:14:00 -- common/autotest_common.sh@10 -- # set +x 00:12:26.479 ************************************ 00:12:26.479 START TEST nvme_rpc_timeouts 00:12:26.479 ************************************ 00:12:26.479 18:14:00 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:26.479 * Looking for test storage... 00:12:26.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:26.479 18:14:00 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:26.479 18:14:00 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:12:26.479 18:14:00 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:26.479 18:14:00 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:26.479 18:14:00 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:12:26.479 18:14:00 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:26.479 18:14:00 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:26.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.479 --rc genhtml_branch_coverage=1 00:12:26.479 --rc genhtml_function_coverage=1 00:12:26.479 --rc genhtml_legend=1 00:12:26.479 --rc geninfo_all_blocks=1 00:12:26.479 --rc geninfo_unexecuted_blocks=1 00:12:26.479 00:12:26.479 ' 00:12:26.479 18:14:00 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:26.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.479 --rc genhtml_branch_coverage=1 00:12:26.479 --rc genhtml_function_coverage=1 00:12:26.479 --rc genhtml_legend=1 00:12:26.479 --rc geninfo_all_blocks=1 00:12:26.479 --rc geninfo_unexecuted_blocks=1 00:12:26.480 00:12:26.480 ' 00:12:26.480 18:14:00 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:26.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.480 --rc genhtml_branch_coverage=1 00:12:26.480 --rc genhtml_function_coverage=1 00:12:26.480 --rc genhtml_legend=1 00:12:26.480 --rc geninfo_all_blocks=1 00:12:26.480 --rc geninfo_unexecuted_blocks=1 00:12:26.480 00:12:26.480 ' 00:12:26.480 18:14:00 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:26.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.480 --rc genhtml_branch_coverage=1 00:12:26.480 --rc genhtml_function_coverage=1 00:12:26.480 --rc genhtml_legend=1 00:12:26.480 --rc geninfo_all_blocks=1 00:12:26.480 --rc geninfo_unexecuted_blocks=1 00:12:26.480 00:12:26.480 ' 00:12:26.480 18:14:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:26.480 18:14:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67369 00:12:26.480 18:14:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67369 00:12:26.480 18:14:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67405 00:12:26.480 18:14:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:26.480 18:14:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:12:26.480 18:14:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67405 00:12:26.480 18:14:00 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67405 ']' 00:12:26.480 18:14:00 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.480 18:14:00 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.480 18:14:00 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.480 18:14:00 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.480 18:14:00 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:26.480 [2024-11-26 18:14:00.788812] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:12:26.480 [2024-11-26 18:14:00.789842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67405 ] 00:12:26.737 [2024-11-26 18:14:00.978862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:26.737 [2024-11-26 18:14:01.140288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.737 [2024-11-26 18:14:01.140291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.669 18:14:02 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.669 Checking default timeout settings: 00:12:27.669 18:14:02 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:12:27.669 18:14:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:12:27.669 18:14:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:28.235 Making settings changes with rpc: 00:12:28.235 18:14:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:12:28.235 18:14:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:12:28.492 Check default vs. modified settings: 00:12:28.492 18:14:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:12:28.492 18:14:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:29.064 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:12:29.064 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:29.064 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67369 00:12:29.064 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:29.064 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:29.064 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67369 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:12:29.065 Setting action_on_timeout is changed as expected. 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67369 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67369 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:12:29.065 Setting timeout_us is changed as expected. 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
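Each verification step above follows the same recipe: pull the key's line out of the pre- and post-change config snapshots, take the second field with awk, strip everything but alphanumerics with sed, and require that the two values differ. A condensed sketch of that check (file names follow the log; the real script compares against the exact values passed to bdev_nvme_set_options, and error handling is simplified here):

    # Compare one setting between the saved-config snapshots traced above.
    check_setting() {
        local key=$1 before after
        before=$(grep "$key" /tmp/settings_default_67369 \
                   | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$key" /tmp/settings_modified_67369 \
                  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [[ "$before" == "$after" ]]; then
            echo "Setting $key was NOT changed" >&2
            return 1
        fi
        echo "Setting $key is changed as expected."
    }
    check_setting timeout_us     # before=0, after=12000000 in this run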
00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67369 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67369 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:29.065 Setting timeout_admin_us is changed as expected. 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67369 /tmp/settings_modified_67369 00:12:29.065 18:14:03 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67405 00:12:29.065 18:14:03 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67405 ']' 00:12:29.065 18:14:03 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67405 00:12:29.065 18:14:03 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:12:29.065 18:14:03 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.065 18:14:03 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67405 00:12:29.065 killing process with pid 67405 00:12:29.065 18:14:03 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.065 18:14:03 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.065 18:14:03 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67405' 00:12:29.065 18:14:03 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67405 00:12:29.065 18:14:03 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67405 00:12:31.591 RPC TIMEOUT SETTING TEST PASSED. 00:12:31.591 18:14:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
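The lcov gate that opens each of these tests (lt 1.15 2, traced in full above) is scripts/common.sh's version comparison: both version strings are split on ".", "-", and ":" into arrays and compared numerically field by field, with missing fields treated as zero. A standalone sketch of the same algorithm (condensed; the real cmp_versions also handles the other operators and validates each field):

    # lt A B: succeed when version A sorts strictly before version B.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov 1.15 < 2: use the legacy --rc coverage options"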
00:12:31.591 00:12:31.591 real 0m5.259s 00:12:31.591 user 0m10.193s 00:12:31.591 sys 0m0.852s 00:12:31.591 ************************************ 00:12:31.591 END TEST nvme_rpc_timeouts 00:12:31.591 ************************************ 00:12:31.591 18:14:05 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.591 18:14:05 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:12:31.591 18:14:05 -- spdk/autotest.sh@239 -- # uname -s 00:12:31.591 18:14:05 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:12:31.591 18:14:05 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:31.591 18:14:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:31.591 18:14:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.591 18:14:05 -- common/autotest_common.sh@10 -- # set +x 00:12:31.591 ************************************ 00:12:31.591 START TEST sw_hotplug 00:12:31.591 ************************************ 00:12:31.591 18:14:05 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:12:31.591 * Looking for test storage... 00:12:31.591 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:31.591 18:14:05 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:31.591 18:14:05 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:31.591 18:14:05 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:12:31.591 18:14:05 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:31.591 18:14:05 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:12:31.591 18:14:05 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.591 18:14:05 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:31.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.591 --rc genhtml_branch_coverage=1 00:12:31.591 --rc genhtml_function_coverage=1 00:12:31.591 --rc genhtml_legend=1 00:12:31.591 --rc geninfo_all_blocks=1 00:12:31.591 --rc geninfo_unexecuted_blocks=1 00:12:31.591 00:12:31.591 ' 00:12:31.591 18:14:05 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:31.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.591 --rc genhtml_branch_coverage=1 00:12:31.591 --rc genhtml_function_coverage=1 00:12:31.591 --rc genhtml_legend=1 00:12:31.591 --rc geninfo_all_blocks=1 00:12:31.591 --rc geninfo_unexecuted_blocks=1 00:12:31.591 00:12:31.591 ' 00:12:31.591 18:14:05 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:31.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.591 --rc genhtml_branch_coverage=1 00:12:31.591 --rc genhtml_function_coverage=1 00:12:31.591 --rc genhtml_legend=1 00:12:31.591 --rc geninfo_all_blocks=1 00:12:31.591 --rc geninfo_unexecuted_blocks=1 00:12:31.591 00:12:31.591 ' 00:12:31.591 18:14:05 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:31.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.591 --rc genhtml_branch_coverage=1 00:12:31.591 --rc genhtml_function_coverage=1 00:12:31.592 --rc genhtml_legend=1 00:12:31.592 --rc geninfo_all_blocks=1 00:12:31.592 --rc geninfo_unexecuted_blocks=1 00:12:31.592 00:12:31.592 ' 00:12:31.592 18:14:05 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:31.849 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:32.106 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:32.106 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:32.106 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:32.106 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:32.106 18:14:06 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:12:32.106 18:14:06 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:12:32.106 18:14:06 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:12:32.106 18:14:06 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@233 -- # local class 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:12:32.106 18:14:06 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:32.107 18:14:06 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@18 -- # local i 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:12:32.107 18:14:06 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:32.107 18:14:06 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:12:32.107 18:14:06 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:12:32.107 18:14:06 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:32.671 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:32.671 Waiting for block devices as requested 00:12:32.671 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:32.929 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:32.929 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:32.929 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:38.188 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:38.188 18:14:12 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:12:38.188 18:14:12 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:38.446 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:12:38.446 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:38.446 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:12:39.012 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:12:39.012 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:39.012 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:39.270 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:12:39.270 18:14:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:39.270 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:12:39.270 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:12:39.270 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68282 00:12:39.270 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:12:39.270 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:12:39.270 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:39.270 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:12:39.270 18:14:13 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:39.270 18:14:13 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:39.270 18:14:13 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:39.270 18:14:13 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:39.270 18:14:13 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:12:39.270 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:39.270 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:39.270 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:12:39.270 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:39.270 18:14:13 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:39.527 Initializing NVMe Controllers 00:12:39.527 Attaching to 0000:00:10.0 00:12:39.527 Attaching to 0000:00:11.0 00:12:39.527 Attached to 0000:00:10.0 00:12:39.527 Attached to 0000:00:11.0 00:12:39.527 Initialization complete. Starting I/O... 
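The scripts/common.sh trace above (nvme_in_userspace → iter_pci_class_code 01 08 02) builds the NVMe device list by matching PCI class 01, subclass 08, prog-if 02 in lspci output. A minimal standalone sketch of that enumeration; the pipeline is taken from the trace, the helper name is illustrative:

    # Sketch of the enumeration traced at scripts/common.sh@233-329.
    # Class 01 = mass storage, subclass 08 = NVM, prog-if 02 = NVM Express.
    nvme_bdfs() {
        local class subclass progif
        class=$(printf %02x 1)
        subclass=$(printf %02x 8)
        progif=$(printf %02x 2)
        lspci -mm -n -D |
            grep -i -- "-p${progif}" |
            awk -v cc="\"${class}${subclass}\"" -F ' ' '{if (cc ~ $2) print $1}' |
            tr -d '"'
    }
    nvme_bdfs    # prints one BDF per line, e.g. 0000:00:10.0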
00:12:39.527 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:12:39.527 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:12:39.527 00:12:40.462 QEMU NVMe Ctrl (12340 ): 1095 I/Os completed (+1095) 00:12:40.462 QEMU NVMe Ctrl (12341 ): 1121 I/Os completed (+1121) 00:12:40.462 00:12:41.834 QEMU NVMe Ctrl (12340 ): 2495 I/Os completed (+1400) 00:12:41.834 QEMU NVMe Ctrl (12341 ): 2561 I/Os completed (+1440) 00:12:41.834 00:12:42.767 QEMU NVMe Ctrl (12340 ): 4131 I/Os completed (+1636) 00:12:42.767 QEMU NVMe Ctrl (12341 ): 4234 I/Os completed (+1673) 00:12:42.767 00:12:43.698 QEMU NVMe Ctrl (12340 ): 5803 I/Os completed (+1672) 00:12:43.698 QEMU NVMe Ctrl (12341 ): 5942 I/Os completed (+1708) 00:12:43.698 00:12:44.630 QEMU NVMe Ctrl (12340 ): 7419 I/Os completed (+1616) 00:12:44.630 QEMU NVMe Ctrl (12341 ): 7595 I/Os completed (+1653) 00:12:44.630 00:12:45.194 18:14:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:45.194 18:14:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:45.194 18:14:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:45.194 [2024-11-26 18:14:19.615074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:45.194 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:45.195 [2024-11-26 18:14:19.617801] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.195 [2024-11-26 18:14:19.617892] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.195 [2024-11-26 18:14:19.617930] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.195 [2024-11-26 18:14:19.617963] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.195 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:45.195 [2024-11-26 18:14:19.621592] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.195 [2024-11-26 18:14:19.621664] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.195 [2024-11-26 18:14:19.621695] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.195 [2024-11-26 18:14:19.621723] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.195 18:14:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:45.195 18:14:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:45.195 [2024-11-26 18:14:19.652596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
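The echo 1 at sw_hotplug.sh@40 above is the surprise-removal half of a hotplug event; the failed-state and aborted-command errors that follow are the driver tearing down the vanished controllers. A hedged sketch of the underlying mechanism, using the standard Linux PCI sysfs nodes (assumed, since xtrace shows only the echoed value, not the redirection target):

    # Assumed expansion of the "echo 1" writes at sw_hotplug.sh@40 and @56:
    # surprise-remove each controller, then rescan so it can be re-attached.
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/${dev}/remove"
    done
    sleep "$hotplug_wait"            # 6 s here, per sw_hotplug.sh@131
    echo 1 > /sys/bus/pci/rescan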
00:12:45.195 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:45.452 [2024-11-26 18:14:19.654884] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.452 [2024-11-26 18:14:19.654959] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.452 [2024-11-26 18:14:19.655007] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.452 [2024-11-26 18:14:19.655038] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.452 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:45.452 18:14:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:45.452 18:14:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:45.452 [2024-11-26 18:14:19.658350] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.452 [2024-11-26 18:14:19.658409] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.452 [2024-11-26 18:14:19.658442] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.452 [2024-11-26 18:14:19.658471] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:45.452 18:14:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:45.452 18:14:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:45.452 18:14:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:45.452 18:14:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:45.452 18:14:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:45.452 18:14:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:45.452 18:14:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:45.452 18:14:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:45.452 Attaching to 0000:00:10.0 00:12:45.452 Attached to 0000:00:10.0 00:12:45.452 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:12:45.452 00:12:45.718 18:14:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:45.718 18:14:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:45.718 18:14:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:45.718 Attaching to 0000:00:11.0 00:12:45.718 Attached to 0000:00:11.0 00:12:46.665 QEMU NVMe Ctrl (12340 ): 1576 I/Os completed (+1576) 00:12:46.665 QEMU NVMe Ctrl (12341 ): 1490 I/Os completed (+1490) 00:12:46.665 00:12:47.597 QEMU NVMe Ctrl (12340 ): 3223 I/Os completed (+1647) 00:12:47.597 QEMU NVMe Ctrl (12341 ): 3165 I/Os completed (+1675) 00:12:47.597 00:12:48.535 QEMU NVMe Ctrl (12340 ): 4859 I/Os completed (+1636) 00:12:48.536 QEMU NVMe Ctrl (12341 ): 4841 I/Os completed (+1676) 00:12:48.536 00:12:49.469 QEMU NVMe Ctrl (12340 ): 6354 I/Os completed (+1495) 00:12:49.469 QEMU NVMe Ctrl (12341 ): 6390 I/Os completed (+1549) 00:12:49.469 00:12:50.402 QEMU NVMe Ctrl (12340 ): 8011 I/Os completed (+1657) 00:12:50.402 QEMU NVMe Ctrl (12341 ): 8076 I/Os completed (+1686) 00:12:50.402 00:12:51.774 QEMU NVMe Ctrl (12340 ): 9527 I/Os completed (+1516) 00:12:51.774 QEMU NVMe Ctrl (12341 ): 9613 I/Os completed (+1537) 00:12:51.774 00:12:52.707 QEMU NVMe Ctrl (12340 ): 11139 I/Os completed (+1612) 00:12:52.707 QEMU NVMe Ctrl (12341 ): 11249 I/Os completed (+1636) 00:12:52.707 00:12:53.641 QEMU NVMe Ctrl (12340 ): 12719 I/Os completed (+1580) 00:12:53.641 QEMU NVMe 
Ctrl (12341 ): 12874 I/Os completed (+1625) 00:12:53.641 00:12:54.576 QEMU NVMe Ctrl (12340 ): 14480 I/Os completed (+1761) 00:12:54.576 QEMU NVMe Ctrl (12341 ): 14639 I/Os completed (+1765) 00:12:54.576 00:12:55.509 QEMU NVMe Ctrl (12340 ): 15882 I/Os completed (+1402) 00:12:55.509 QEMU NVMe Ctrl (12341 ): 16111 I/Os completed (+1472) 00:12:55.509 00:12:56.482 QEMU NVMe Ctrl (12340 ): 17406 I/Os completed (+1524) 00:12:56.482 QEMU NVMe Ctrl (12341 ): 17695 I/Os completed (+1584) 00:12:56.482 00:12:57.416 QEMU NVMe Ctrl (12340 ): 19018 I/Os completed (+1612) 00:12:57.416 QEMU NVMe Ctrl (12341 ): 19335 I/Os completed (+1640) 00:12:57.416 00:12:57.674 18:14:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:57.674 18:14:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:57.674 18:14:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:57.674 18:14:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:57.674 [2024-11-26 18:14:31.942818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:57.674 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:57.674 [2024-11-26 18:14:31.944837] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.674 [2024-11-26 18:14:31.944908] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.674 [2024-11-26 18:14:31.944940] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.674 [2024-11-26 18:14:31.944968] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.674 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:57.674 [2024-11-26 18:14:31.948077] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.674 [2024-11-26 18:14:31.948134] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.674 [2024-11-26 18:14:31.948159] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.674 [2024-11-26 18:14:31.948184] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.674 18:14:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:57.674 18:14:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:57.674 [2024-11-26 18:14:31.974418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
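Between events, the per-device sequence at sw_hotplug.sh@58-62 echoes uio_pci_generic, the BDF twice, and an empty string; one plausible reading is a rebind through the driver_override interface. This is a hypothetical expansion, as the sysfs targets are not visible in the trace:

    # Hypothetical expansion of sw_hotplug.sh@58-62; redirection targets assumed.
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/${dev}/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe                        # probe with override
        echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/bind 2> /dev/null || true
        echo '' > "/sys/bus/pci/devices/${dev}/driver_override"         # clear override
    done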
00:12:57.674 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:57.674 [2024-11-26 18:14:31.976305] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.674 [2024-11-26 18:14:31.976387] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.674 [2024-11-26 18:14:31.976421] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.674 [2024-11-26 18:14:31.976445] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.674 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:57.674 [2024-11-26 18:14:31.979268] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.674 [2024-11-26 18:14:31.979354] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.674 [2024-11-26 18:14:31.979382] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.674 [2024-11-26 18:14:31.979406] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:57.674 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:57.674 EAL: Scan for (pci) bus failed. 00:12:57.674 18:14:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:57.674 18:14:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:57.674 18:14:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:57.674 18:14:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:57.674 18:14:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:57.932 18:14:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:57.932 18:14:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:57.932 18:14:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:57.932 18:14:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:57.932 18:14:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:57.932 Attaching to 0000:00:10.0 00:12:57.932 Attached to 0000:00:10.0 00:12:57.932 18:14:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:57.932 18:14:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:57.932 18:14:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:57.932 Attaching to 0000:00:11.0 00:12:57.932 Attached to 0000:00:11.0 00:12:58.498 QEMU NVMe Ctrl (12340 ): 1048 I/Os completed (+1048) 00:12:58.498 QEMU NVMe Ctrl (12341 ): 914 I/Os completed (+914) 00:12:58.498 00:12:59.433 QEMU NVMe Ctrl (12340 ): 2629 I/Os completed (+1581) 00:12:59.433 QEMU NVMe Ctrl (12341 ): 2523 I/Os completed (+1609) 00:12:59.433 00:13:00.805 QEMU NVMe Ctrl (12340 ): 4121 I/Os completed (+1492) 00:13:00.805 QEMU NVMe Ctrl (12341 ): 4057 I/Os completed (+1534) 00:13:00.805 00:13:01.737 QEMU NVMe Ctrl (12340 ): 5557 I/Os completed (+1436) 00:13:01.737 QEMU NVMe Ctrl (12341 ): 5557 I/Os completed (+1500) 00:13:01.737 00:13:02.671 QEMU NVMe Ctrl (12340 ): 7127 I/Os completed (+1570) 00:13:02.671 QEMU NVMe Ctrl (12341 ): 7188 I/Os completed (+1631) 00:13:02.671 00:13:03.604 QEMU NVMe Ctrl (12340 ): 8820 I/Os completed (+1693) 00:13:03.604 QEMU NVMe Ctrl (12341 ): 8913 I/Os completed (+1725) 00:13:03.604 00:13:04.538 QEMU NVMe Ctrl (12340 ): 10500 I/Os completed (+1680) 00:13:04.538 QEMU NVMe Ctrl (12341 ): 10616 I/Os completed (+1703) 00:13:04.538 00:13:05.474 
QEMU NVMe Ctrl (12340 ): 11968 I/Os completed (+1468) 00:13:05.474 QEMU NVMe Ctrl (12341 ): 12102 I/Os completed (+1486) 00:13:05.474 00:13:06.412 QEMU NVMe Ctrl (12340 ): 13488 I/Os completed (+1520) 00:13:06.412 QEMU NVMe Ctrl (12341 ): 13629 I/Os completed (+1527) 00:13:06.412 00:13:07.786 QEMU NVMe Ctrl (12340 ): 15136 I/Os completed (+1648) 00:13:07.786 QEMU NVMe Ctrl (12341 ): 15322 I/Os completed (+1693) 00:13:07.786 00:13:08.717 QEMU NVMe Ctrl (12340 ): 16808 I/Os completed (+1672) 00:13:08.717 QEMU NVMe Ctrl (12341 ): 17025 I/Os completed (+1703) 00:13:08.717 00:13:09.651 QEMU NVMe Ctrl (12340 ): 18532 I/Os completed (+1724) 00:13:09.651 QEMU NVMe Ctrl (12341 ): 18768 I/Os completed (+1743) 00:13:09.652 00:13:09.916 18:14:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:09.916 18:14:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:09.916 18:14:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:09.916 18:14:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:09.916 [2024-11-26 18:14:44.276463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:09.916 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:09.916 [2024-11-26 18:14:44.278526] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.916 [2024-11-26 18:14:44.278611] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.916 [2024-11-26 18:14:44.278641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.916 [2024-11-26 18:14:44.278673] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.916 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:09.916 [2024-11-26 18:14:44.281854] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.916 [2024-11-26 18:14:44.281913] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.916 [2024-11-26 18:14:44.281937] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.917 [2024-11-26 18:14:44.281960] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.917 18:14:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:09.917 18:14:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:09.917 [2024-11-26 18:14:44.310264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:09.917 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:09.917 [2024-11-26 18:14:44.312146] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.917 [2024-11-26 18:14:44.312211] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.917 [2024-11-26 18:14:44.312243] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.917 [2024-11-26 18:14:44.312268] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.917 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:09.917 [2024-11-26 18:14:44.314988] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.917 [2024-11-26 18:14:44.315044] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.917 [2024-11-26 18:14:44.315073] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.917 [2024-11-26 18:14:44.315096] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.917 18:14:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:09.917 18:14:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:09.917 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:13:09.917 EAL: Scan for (pci) bus failed. 00:13:10.184 18:14:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:10.184 18:14:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:10.184 18:14:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:10.184 18:14:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:10.184 18:14:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:10.184 18:14:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:10.184 18:14:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:10.184 18:14:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:10.184 Attaching to 0000:00:10.0 00:13:10.184 Attached to 0000:00:10.0 00:13:10.184 18:14:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:10.184 18:14:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:10.184 18:14:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:10.184 Attaching to 0000:00:11.0 00:13:10.184 Attached to 0000:00:11.0 00:13:10.184 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:10.184 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:10.184 [2024-11-26 18:14:44.586156] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:13:22.400 18:14:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:22.400 18:14:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:22.400 18:14:56 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.97 00:13:22.400 18:14:56 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.97 00:13:22.400 18:14:56 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:13:22.400 18:14:56 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.97 00:13:22.400 18:14:56 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.97 2 00:13:22.400 remove_attach_helper took 42.97s to complete (handling 2 nvme drive(s)) 18:14:56 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:13:29.028 18:15:02 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68282 00:13:29.028 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68282) - No such process 00:13:29.028 18:15:02 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68282 00:13:29.028 18:15:02 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:13:29.028 18:15:02 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:13:29.028 18:15:02 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:13:29.028 18:15:02 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68826 00:13:29.028 18:15:02 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:29.028 18:15:02 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:13:29.028 18:15:02 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68826 00:13:29.028 18:15:02 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68826 ']' 00:13:29.028 18:15:02 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.028 18:15:02 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.028 18:15:02 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.028 18:15:02 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.028 18:15:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:29.028 [2024-11-26 18:15:02.746307] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
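From sw_hotplug.sh@151 (tgt_run_hotplug) the test repeats the exercise against a running SPDK target instead of the standalone hotplug app: spdk_tgt is started above, and the trace that follows enables hotplug monitoring over RPC before rerunning the helper with use_bdev=true. A condensed sketch; rpc.py stands in for the rpc_cmd wrapper, and killprocess/waitforlisten are the test helpers named in the trace:

    # Condensed from the tgt_run_hotplug trace; paths relative to the SPDK repo.
    build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    trap 'killprocess $spdk_tgt_pid; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_tgt_pid"              # wait for /var/tmp/spdk.sock
    scripts/rpc.py bdev_nvme_set_hotplug -e    # enable the bdev-layer hotplug poller
    debug_remove_attach_helper 3 6 true        # 3 events, 6 s wait, bdev mode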
00:13:29.028 [2024-11-26 18:15:02.746495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68826 ] 00:13:29.028 [2024-11-26 18:15:02.930244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.028 [2024-11-26 18:15:03.088787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.594 18:15:04 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.594 18:15:04 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:13:29.594 18:15:04 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:29.594 18:15:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.594 18:15:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:29.594 18:15:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.594 18:15:04 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:13:29.594 18:15:04 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:29.594 18:15:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:29.594 18:15:04 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:13:29.594 18:15:04 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:13:29.594 18:15:04 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:13:29.594 18:15:04 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:13:29.594 18:15:04 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:13:29.594 18:15:04 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:29.594 18:15:04 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:29.594 18:15:04 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:29.594 18:15:04 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:29.594 18:15:04 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:36.150 18:15:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:36.150 18:15:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:36.150 18:15:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:36.150 18:15:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:36.150 18:15:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:36.150 18:15:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:36.150 18:15:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:36.150 18:15:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:36.150 18:15:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:36.150 18:15:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:36.150 18:15:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.150 18:15:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:36.150 18:15:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:36.150 [2024-11-26 18:15:10.123946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:13:36.150 [2024-11-26 18:15:10.127250] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:36.150 [2024-11-26 18:15:10.127318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.150 [2024-11-26 18:15:10.127352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.150 [2024-11-26 18:15:10.127473] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:36.150 [2024-11-26 18:15:10.127498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.150 [2024-11-26 18:15:10.127518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.150 [2024-11-26 18:15:10.127538] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:36.150 [2024-11-26 18:15:10.127571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.150 [2024-11-26 18:15:10.127589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.150 [2024-11-26 18:15:10.127613] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:36.150 [2024-11-26 18:15:10.127629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.150 [2024-11-26 18:15:10.127646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.150 18:15:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.150 18:15:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:36.150 18:15:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:36.408 18:15:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:36.408 18:15:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:36.408 18:15:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:36.408 18:15:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:36.408 18:15:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:36.408 18:15:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:36.408 18:15:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.408 18:15:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:36.408 18:15:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.408 18:15:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:36.408 18:15:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:36.408 [2024-11-26 18:15:10.823864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
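In bdev mode the helper does not rely on timing alone: the loop at sw_hotplug.sh@50-51 polls the target until the removed controllers stop showing up in bdev_get_bdevs. A standalone sketch; the jq filter and sort -u are verbatim from the trace, rpc.py again stands in for rpc_cmd:

    # Poll until no nvme bdev still reports a PCI address, i.e. until the
    # surprise-removed controllers are really gone from the target.
    bdev_bdfs() {
        scripts/rpc.py bdev_get_bdevs |
            jq -r '.[].driver_specific.nvme[].pci_address' |
            sort -u
    }
    while :; do
        bdfs=($(bdev_bdfs))
        ((${#bdfs[@]} > 0)) || break
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done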
00:13:36.408 [2024-11-26 18:15:10.827111] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:36.408 [2024-11-26 18:15:10.827180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.408 [2024-11-26 18:15:10.827205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.408 [2024-11-26 18:15:10.827232] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:36.408 [2024-11-26 18:15:10.827282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.408 [2024-11-26 18:15:10.827329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.408 [2024-11-26 18:15:10.827349] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:36.408 [2024-11-26 18:15:10.827364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.408 [2024-11-26 18:15:10.827382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.408 [2024-11-26 18:15:10.827398] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:36.408 [2024-11-26 18:15:10.827415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:36.408 [2024-11-26 18:15:10.827430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:36.972 18:15:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:36.972 18:15:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:36.972 18:15:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:36.972 18:15:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:36.972 18:15:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:36.972 18:15:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:36.972 18:15:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.972 18:15:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:36.972 18:15:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.972 18:15:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:36.972 18:15:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:36.972 18:15:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:36.972 18:15:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:36.972 18:15:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:37.229 18:15:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:37.229 18:15:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:37.229 18:15:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:37.229 18:15:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:37.229 18:15:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:13:37.229 18:15:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:37.229 18:15:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:37.229 18:15:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:49.427 18:15:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:49.427 18:15:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:49.427 18:15:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:49.427 18:15:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:49.427 18:15:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:49.427 18:15:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:49.427 18:15:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.427 18:15:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:49.427 18:15:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.427 18:15:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:49.427 18:15:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:49.427 18:15:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:49.427 18:15:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:49.427 18:15:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:49.427 18:15:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:49.427 18:15:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:49.427 18:15:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:49.427 18:15:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:49.427 18:15:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:49.427 18:15:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:49.427 18:15:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.427 18:15:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:49.427 18:15:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:49.427 18:15:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.427 [2024-11-26 18:15:23.724176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:13:49.428 [2024-11-26 18:15:23.727144] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.428 [2024-11-26 18:15:23.727201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.428 [2024-11-26 18:15:23.727224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.428 [2024-11-26 18:15:23.727256] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.428 [2024-11-26 18:15:23.727274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.428 [2024-11-26 18:15:23.727293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.428 [2024-11-26 18:15:23.727310] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.428 [2024-11-26 18:15:23.727328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.428 [2024-11-26 18:15:23.727343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.428 [2024-11-26 18:15:23.727362] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.428 [2024-11-26 18:15:23.727377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.428 [2024-11-26 18:15:23.727395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.428 18:15:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:49.428 18:15:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:49.685 [2024-11-26 18:15:24.124164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:49.685 [2024-11-26 18:15:24.127166] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.685 [2024-11-26 18:15:24.127247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.685 [2024-11-26 18:15:24.127276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.685 [2024-11-26 18:15:24.127302] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.685 [2024-11-26 18:15:24.127322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.685 [2024-11-26 18:15:24.127337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.685 [2024-11-26 18:15:24.127356] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.685 [2024-11-26 18:15:24.127370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.685 [2024-11-26 18:15:24.127387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.685 [2024-11-26 18:15:24.127402] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:49.685 [2024-11-26 18:15:24.127419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:49.685 [2024-11-26 18:15:24.127433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:49.943 18:15:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:49.943 18:15:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:49.943 18:15:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:49.943 18:15:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:49.943 18:15:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:49.943 18:15:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:49.943 18:15:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.943 18:15:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:49.943 18:15:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.943 18:15:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:49.943 18:15:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:50.200 18:15:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:50.200 18:15:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:50.200 18:15:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:50.200 18:15:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:50.200 18:15:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:50.200 18:15:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:50.200 18:15:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:50.200 18:15:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:50.200 18:15:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:50.200 18:15:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:50.200 18:15:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:02.395 18:15:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:02.395 18:15:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:02.395 18:15:36 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:02.395 18:15:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:02.395 18:15:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:02.395 18:15:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:02.395 18:15:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.395 18:15:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:02.395 18:15:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.395 18:15:36 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:02.395 18:15:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:02.395 18:15:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:02.395 18:15:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:02.395 18:15:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:02.395 18:15:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:02.395 [2024-11-26 18:15:36.724366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:02.395 18:15:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:02.395 18:15:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:02.395 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:14:02.395 EAL: Scan for (pci) bus failed. 
00:14:02.395 [2024-11-26 18:15:36.727591] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.395 [2024-11-26 18:15:36.727644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.395 [2024-11-26 18:15:36.727674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.395 [2024-11-26 18:15:36.727706] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.395 [2024-11-26 18:15:36.727728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.395 [2024-11-26 18:15:36.727751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.395 [2024-11-26 18:15:36.727770] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.395 [2024-11-26 18:15:36.727789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.395 [2024-11-26 18:15:36.727804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.395 18:15:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:02.395 [2024-11-26 18:15:36.727823] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.395 [2024-11-26 18:15:36.727838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.395 [2024-11-26 18:15:36.727856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.395 18:15:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:02.395 18:15:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:02.395 18:15:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:02.395 18:15:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.395 18:15:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:02.395 18:15:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.395 18:15:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:02.395 18:15:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:02.961 [2024-11-26 18:15:37.124370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:02.961 [2024-11-26 18:15:37.127486] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.961 [2024-11-26 18:15:37.127595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.961 [2024-11-26 18:15:37.127623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.961 [2024-11-26 18:15:37.127669] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.961 [2024-11-26 18:15:37.127689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.961 [2024-11-26 18:15:37.127705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.961 [2024-11-26 18:15:37.127726] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.961 [2024-11-26 18:15:37.127742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.961 [2024-11-26 18:15:37.127763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.961 [2024-11-26 18:15:37.127779] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:02.961 [2024-11-26 18:15:37.127797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:02.961 [2024-11-26 18:15:37.127812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.961 18:15:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:02.961 18:15:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:02.961 18:15:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:02.961 18:15:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:02.961 18:15:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:02.961 18:15:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:02.961 18:15:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.961 18:15:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:02.961 18:15:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.961 18:15:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:02.961 18:15:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:03.219 18:15:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:03.219 18:15:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:03.219 18:15:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:03.219 18:15:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:03.219 18:15:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:03.219 18:15:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:03.219 18:15:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:03.219 18:15:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:14:03.219 18:15:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:03.219 18:15:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:03.219 18:15:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:15.413 18:15:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:15.413 18:15:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:15.413 18:15:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:15.413 18:15:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:15.413 18:15:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:15.413 18:15:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:15.413 18:15:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.413 18:15:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:15.413 18:15:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.413 18:15:49 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:15.413 18:15:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:15.413 18:15:49 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.65 00:14:15.413 18:15:49 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.65 00:14:15.413 18:15:49 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:15.413 18:15:49 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.65 00:14:15.413 18:15:49 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.65 2 00:14:15.413 remove_attach_helper took 45.65s to complete (handling 2 nvme drive(s)) 18:15:49 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:14:15.413 18:15:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.413 18:15:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:15.413 18:15:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.413 18:15:49 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:15.413 18:15:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.413 18:15:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:15.413 18:15:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.413 18:15:49 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:14:15.413 18:15:49 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:15.413 18:15:49 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:14:15.413 18:15:49 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:14:15.413 18:15:49 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:14:15.413 18:15:49 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:14:15.413 18:15:49 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:14:15.413 18:15:49 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:14:15.413 18:15:49 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:15.413 18:15:49 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:15.413 18:15:49 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:14:15.413 18:15:49 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:15.413 18:15:49 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:21.965 18:15:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:21.965 18:15:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:21.965 18:15:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:21.965 18:15:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:21.965 18:15:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:21.965 18:15:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:21.965 18:15:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:21.965 18:15:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:21.965 18:15:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:21.965 18:15:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:21.965 18:15:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:21.965 18:15:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.965 18:15:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:21.965 [2024-11-26 18:15:55.808689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:21.965 [2024-11-26 18:15:55.811292] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:21.965 [2024-11-26 18:15:55.811363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.965 [2024-11-26 18:15:55.811397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.965 [2024-11-26 18:15:55.811429] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:21.965 [2024-11-26 18:15:55.811446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.965 [2024-11-26 18:15:55.811465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.965 [2024-11-26 18:15:55.811482] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:21.965 [2024-11-26 18:15:55.811500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.965 [2024-11-26 18:15:55.811515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.965 [2024-11-26 18:15:55.811534] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:21.965 [2024-11-26 18:15:55.811550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:21.965 [2024-11-26 18:15:55.811589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:21.965 18:15:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.965 18:15:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:21.965 18:15:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:21.965 18:15:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:21.965 18:15:56 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:21.965 18:15:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:21.965 18:15:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:21.965 18:15:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:21.965 18:15:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:21.965 18:15:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.965 18:15:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:21.965 18:15:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.965 18:15:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:21.965 18:15:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:22.223 [2024-11-26 18:15:56.508743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:14:22.223 [2024-11-26 18:15:56.510976] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:22.223 [2024-11-26 18:15:56.511060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:22.223 [2024-11-26 18:15:56.511087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.223 [2024-11-26 18:15:56.511114] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:22.223 [2024-11-26 18:15:56.511135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:22.223 [2024-11-26 18:15:56.511150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.223 [2024-11-26 18:15:56.511177] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:22.223 [2024-11-26 18:15:56.511193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:22.223 [2024-11-26 18:15:56.511211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.223 [2024-11-26 18:15:56.511227] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:22.223 [2024-11-26 18:15:56.511245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:22.223 [2024-11-26 18:15:56.511260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:22.481 18:15:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:22.481 18:15:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:22.481 18:15:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:22.481 18:15:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:22.481 18:15:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:22.481 18:15:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:22.481 18:15:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.481 18:15:56 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:14:22.481 18:15:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.756 18:15:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:22.756 18:15:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:22.756 18:15:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:22.756 18:15:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:22.756 18:15:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:22.756 18:15:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:22.756 18:15:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:22.756 18:15:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:22.756 18:15:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:22.756 18:15:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:23.014 18:15:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:23.014 18:15:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:23.014 18:15:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:35.214 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:35.214 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:35.214 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:35.214 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:35.214 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:35.214 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:35.214 18:16:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.214 18:16:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:35.214 18:16:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.214 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:35.214 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:35.214 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:35.214 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:35.214 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:35.214 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:35.214 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:35.214 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:35.214 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:35.214 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:35.214 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:35.214 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:35.214 18:16:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.214 18:16:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:35.214 [2024-11-26 18:16:09.408941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:14:35.214 [2024-11-26 18:16:09.411242] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.214 [2024-11-26 18:16:09.411318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.214 [2024-11-26 18:16:09.411341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.214 [2024-11-26 18:16:09.411372] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.214 [2024-11-26 18:16:09.411388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.214 [2024-11-26 18:16:09.411407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.214 [2024-11-26 18:16:09.411425] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.214 [2024-11-26 18:16:09.411443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.214 [2024-11-26 18:16:09.411457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.214 [2024-11-26 18:16:09.411475] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.214 [2024-11-26 18:16:09.411490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.214 [2024-11-26 18:16:09.411508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.214 18:16:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.214 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:35.214 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:35.473 [2024-11-26 18:16:09.808935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
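A note on the helper that recurs throughout this trace: the nvme/sw_hotplug.sh@12-13 frames show bdev_bdfs listing the PCI addresses that still back an SPDK bdev. Reconstructed from the xtrace output above (the literal script source may differ slightly), it is roughly:

    # Reconstructed from the xtrace: list the PCI BDFs still visible via bdev_get_bdevs.
    # rpc_cmd is autotest's wrapper around scripts/rpc.py, sourced from common.sh.
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' \
            <(rpc_cmd bdev_get_bdevs) | sort -u
    }

The hotplug loop treats an empty result as "all controllers gone" and a two-entry result matching 0000:00:10.0 0000:00:11.0 (the @71 comparison) as "both reattached".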
00:14:35.473 [2024-11-26 18:16:09.811269] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.473 [2024-11-26 18:16:09.811319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.473 [2024-11-26 18:16:09.811344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.473 [2024-11-26 18:16:09.811381] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.473 [2024-11-26 18:16:09.811406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.473 [2024-11-26 18:16:09.811422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.473 [2024-11-26 18:16:09.811473] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.473 [2024-11-26 18:16:09.811488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.473 [2024-11-26 18:16:09.811506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.473 [2024-11-26 18:16:09.811522] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:35.473 [2024-11-26 18:16:09.811540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:35.473 [2024-11-26 18:16:09.811555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:35.730 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:35.730 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:35.730 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:35.730 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:35.730 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:35.730 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:35.730 18:16:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.730 18:16:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:35.730 18:16:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.730 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:35.730 18:16:09 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:35.730 18:16:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:35.730 18:16:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:35.730 18:16:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:35.730 18:16:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:35.988 18:16:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:35.988 18:16:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:35.988 18:16:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:35.988 18:16:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:35.988 18:16:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:35.988 18:16:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:35.988 18:16:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:48.236 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:48.236 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:48.236 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:48.236 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:48.236 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:48.236 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:48.236 18:16:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.236 18:16:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:48.236 18:16:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.236 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:48.236 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:48.236 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:48.236 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:48.236 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:48.236 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:48.236 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:48.236 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:48.236 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:48.236 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:48.236 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:48.236 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:48.236 18:16:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.236 18:16:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:48.236 [2024-11-26 18:16:22.409103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:14:48.236 [2024-11-26 18:16:22.414480] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:48.236 [2024-11-26 18:16:22.414543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.236 [2024-11-26 18:16:22.414581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.236 [2024-11-26 18:16:22.414621] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:48.236 [2024-11-26 18:16:22.414638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.236 [2024-11-26 18:16:22.414657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.236 [2024-11-26 18:16:22.414674] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:48.236 [2024-11-26 18:16:22.414696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.236 [2024-11-26 18:16:22.414712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.236 [2024-11-26 18:16:22.414732] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:48.236 [2024-11-26 18:16:22.414747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.236 [2024-11-26 18:16:22.414765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.236 18:16:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.236 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:48.236 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:48.495 [2024-11-26 18:16:22.809117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
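Each hotplug event in this run follows the same pattern: soft-remove both controllers, poll bdev_bdfs until their BDFs disappear, then rescan, rebind to uio_pci_generic, and wait 12 s before verifying. xtrace shows only the echoed values (1, uio_pci_generic, the BDFs, ''), not the redirection targets, so the sysfs paths in this sketch are assumptions based on the standard Linux PCI interface, not taken from the script:

    # One remove/attach cycle, reconstructed from the trace; sysfs targets assumed.
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"        # sw_hotplug.sh@40
    done
    bdfs=($(bdev_bdfs))                                    # sw_hotplug.sh@50
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"   # @51
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done
    echo 1 > /sys/bus/pci/rescan                           # @56; target assumed
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59; assumed
        echo "$dev" > /sys/bus/pci/drivers_probe           # @60-61 echo the BDF twice; targets assumed
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62; assumed
    done
    sleep 12                                               # @66: let the driver settle

The glob match at @71 then compares the sorted BDF list against the expected pair before the next iteration decrements hotplug_events.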
00:14:48.495 [2024-11-26 18:16:22.811466] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:48.495 [2024-11-26 18:16:22.811546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.495 [2024-11-26 18:16:22.811585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.495 [2024-11-26 18:16:22.811616] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:48.495 [2024-11-26 18:16:22.811636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.495 [2024-11-26 18:16:22.811652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.495 [2024-11-26 18:16:22.811689] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:48.495 [2024-11-26 18:16:22.811704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.495 [2024-11-26 18:16:22.811722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.495 [2024-11-26 18:16:22.811738] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:48.495 [2024-11-26 18:16:22.811760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.495 [2024-11-26 18:16:22.811775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.495 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:48.495 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:48.753 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:48.753 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:48.753 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:48.753 18:16:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:48.753 18:16:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.753 18:16:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:48.753 18:16:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.753 18:16:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:48.753 18:16:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:48.753 18:16:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:48.753 18:16:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:48.753 18:16:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:48.753 18:16:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:48.753 18:16:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:48.753 18:16:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:48.753 18:16:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:48.753 18:16:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:49.011 18:16:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:49.011 18:16:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:49.011 18:16:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:01.281 18:16:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:01.281 18:16:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:01.281 18:16:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:01.281 18:16:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:01.281 18:16:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:01.281 18:16:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:01.281 18:16:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.281 18:16:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:01.281 18:16:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.281 18:16:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:01.281 18:16:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:01.281 18:16:35 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.62 00:15:01.281 18:16:35 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.62 00:15:01.281 18:16:35 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:15:01.281 18:16:35 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.62 00:15:01.281 18:16:35 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.62 2 00:15:01.281 remove_attach_helper took 45.62s to complete (handling 2 nvme drive(s)) 18:16:35 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:15:01.281 18:16:35 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68826 00:15:01.281 18:16:35 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68826 ']' 00:15:01.281 18:16:35 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68826 00:15:01.281 18:16:35 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:15:01.281 18:16:35 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:01.281 18:16:35 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68826 00:15:01.281 18:16:35 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:01.281 18:16:35 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:01.281 killing process with pid 68826 00:15:01.281 18:16:35 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68826' 00:15:01.281 18:16:35 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68826 00:15:01.281 18:16:35 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68826 00:15:03.834 18:16:37 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:03.834 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:04.091 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:04.091 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:04.349 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:04.349 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:04.349 00:15:04.349 real 2m33.031s 00:15:04.349 user 1m53.485s 00:15:04.349 sys 0m19.220s 00:15:04.349 18:16:38 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:04.349 ************************************ 00:15:04.349 END TEST sw_hotplug 00:15:04.350 18:16:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:04.350 ************************************ 00:15:04.350 18:16:38 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:15:04.350 18:16:38 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:04.350 18:16:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:04.350 18:16:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:04.350 18:16:38 -- common/autotest_common.sh@10 -- # set +x 00:15:04.609 ************************************ 00:15:04.609 START TEST nvme_xnvme 00:15:04.609 ************************************ 00:15:04.609 18:16:38 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:04.609 * Looking for test storage... 00:15:04.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:04.609 18:16:38 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:04.609 18:16:38 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:15:04.609 18:16:38 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:04.609 18:16:39 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:04.609 18:16:39 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:15:04.609 18:16:39 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:04.609 18:16:39 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:04.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.609 --rc genhtml_branch_coverage=1 00:15:04.609 --rc genhtml_function_coverage=1 00:15:04.609 --rc genhtml_legend=1 00:15:04.609 --rc geninfo_all_blocks=1 00:15:04.609 --rc geninfo_unexecuted_blocks=1 00:15:04.609 00:15:04.609 ' 00:15:04.609 18:16:39 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:04.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.609 --rc genhtml_branch_coverage=1 00:15:04.609 --rc genhtml_function_coverage=1 00:15:04.609 --rc genhtml_legend=1 00:15:04.609 --rc geninfo_all_blocks=1 00:15:04.609 --rc geninfo_unexecuted_blocks=1 00:15:04.609 00:15:04.609 ' 00:15:04.609 18:16:39 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:04.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.609 --rc genhtml_branch_coverage=1 00:15:04.609 --rc genhtml_function_coverage=1 00:15:04.609 --rc genhtml_legend=1 00:15:04.609 --rc geninfo_all_blocks=1 00:15:04.609 --rc geninfo_unexecuted_blocks=1 00:15:04.609 00:15:04.609 ' 00:15:04.609 18:16:39 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:04.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.609 --rc genhtml_branch_coverage=1 00:15:04.609 --rc genhtml_function_coverage=1 00:15:04.609 --rc genhtml_legend=1 00:15:04.609 --rc geninfo_all_blocks=1 00:15:04.609 --rc geninfo_unexecuted_blocks=1 00:15:04.609 00:15:04.609 ' 00:15:04.610 18:16:39 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:15:04.610 18:16:39 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:15:04.610 18:16:39 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:15:04.610 18:16:39 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:15:04.610 18:16:39 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:15:04.610 18:16:39 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:15:04.610 18:16:39 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:15:04.610 18:16:39 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:15:04.610 18:16:39 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:15:04.610 18:16:39 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
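Backing up a few records: the scripts/common.sh walk-through above (frames @333-@368) is autotest's generic version comparator, invoked here as lt 1.15 2 to pick lcov options. A condensed sketch of the same logic; the real helper also validates each field through decimal(), omitted here:

    # Condensed from the scripts/common.sh trace: field-wise version compare.
    cmp_versions() {
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            ((d1 > d2)) && { [[ $2 == '>' ]]; return; }
            ((d1 < d2)) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == *'='* ]]    # all fields equal: true only for ops that accept equality
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> true, as traced above

With ver1=(1 15) and ver2=(2), the first field already decides the comparison, which is why the trace returns 0 right after the @367-@368 checks.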
00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:15:04.610 18:16:39 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:15:04.610 18:16:39 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:15:04.610 18:16:39 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:15:04.610 18:16:39 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:15:04.610 18:16:39 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:15:04.610 18:16:39 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:15:04.610 18:16:39 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:15:04.610 18:16:39 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:15:04.610 18:16:39 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:15:04.610 18:16:39 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:15:04.610 18:16:39 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:04.610 18:16:39 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:04.610 18:16:39 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:04.610 18:16:39 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:15:04.610 18:16:39 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:15:04.610 18:16:39 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:15:04.610 18:16:39 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:15:04.610 18:16:39 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:15:04.610 #define SPDK_CONFIG_H 00:15:04.610 #define SPDK_CONFIG_AIO_FSDEV 1 00:15:04.610 #define SPDK_CONFIG_APPS 1 00:15:04.610 #define SPDK_CONFIG_ARCH native 00:15:04.610 #define SPDK_CONFIG_ASAN 1 00:15:04.610 #undef SPDK_CONFIG_AVAHI 00:15:04.610 #undef SPDK_CONFIG_CET 00:15:04.610 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:15:04.610 #define SPDK_CONFIG_COVERAGE 1 00:15:04.610 #define SPDK_CONFIG_CROSS_PREFIX 00:15:04.610 #undef SPDK_CONFIG_CRYPTO 00:15:04.610 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:04.610 #undef SPDK_CONFIG_CUSTOMOCF 00:15:04.611 #undef SPDK_CONFIG_DAOS 00:15:04.611 #define SPDK_CONFIG_DAOS_DIR 00:15:04.611 #define SPDK_CONFIG_DEBUG 1 00:15:04.611 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:04.611 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:15:04.611 #define SPDK_CONFIG_DPDK_INC_DIR 00:15:04.611 #define SPDK_CONFIG_DPDK_LIB_DIR 00:15:04.611 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:04.611 #undef SPDK_CONFIG_DPDK_UADK 00:15:04.611 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:04.611 #define SPDK_CONFIG_EXAMPLES 1 00:15:04.611 #undef SPDK_CONFIG_FC 00:15:04.611 #define SPDK_CONFIG_FC_PATH 00:15:04.611 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:04.611 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:04.611 #define SPDK_CONFIG_FSDEV 1 00:15:04.611 #undef SPDK_CONFIG_FUSE 00:15:04.611 #undef SPDK_CONFIG_FUZZER 00:15:04.611 #define SPDK_CONFIG_FUZZER_LIB 00:15:04.611 #undef SPDK_CONFIG_GOLANG 00:15:04.611 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:15:04.611 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:15:04.611 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:04.611 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:15:04.611 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:04.611 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:04.611 #undef SPDK_CONFIG_HAVE_LZ4 00:15:04.611 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:15:04.611 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:15:04.611 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:15:04.611 #define SPDK_CONFIG_IDXD 1 00:15:04.611 #define SPDK_CONFIG_IDXD_KERNEL 1 00:15:04.611 #undef SPDK_CONFIG_IPSEC_MB 00:15:04.611 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:04.611 #define SPDK_CONFIG_ISAL 1 00:15:04.611 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:15:04.611 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:15:04.611 #define SPDK_CONFIG_LIBDIR 00:15:04.611 #undef SPDK_CONFIG_LTO 00:15:04.611 #define SPDK_CONFIG_MAX_LCORES 128 00:15:04.611 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:15:04.611 #define SPDK_CONFIG_NVME_CUSE 1 00:15:04.611 #undef SPDK_CONFIG_OCF 00:15:04.611 #define SPDK_CONFIG_OCF_PATH 00:15:04.611 #define SPDK_CONFIG_OPENSSL_PATH 00:15:04.611 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:04.611 #define SPDK_CONFIG_PGO_DIR 00:15:04.611 #undef SPDK_CONFIG_PGO_USE 00:15:04.611 #define SPDK_CONFIG_PREFIX /usr/local 00:15:04.611 #undef SPDK_CONFIG_RAID5F 00:15:04.611 #undef SPDK_CONFIG_RBD 00:15:04.611 #define SPDK_CONFIG_RDMA 1 00:15:04.611 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:04.611 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:04.611 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:15:04.611 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:04.611 #define SPDK_CONFIG_SHARED 1 00:15:04.611 #undef SPDK_CONFIG_SMA 00:15:04.611 #define SPDK_CONFIG_TESTS 1 00:15:04.611 #undef SPDK_CONFIG_TSAN 00:15:04.611 #define SPDK_CONFIG_UBLK 1 00:15:04.611 #define SPDK_CONFIG_UBSAN 1 00:15:04.611 #undef SPDK_CONFIG_UNIT_TESTS 00:15:04.611 #undef SPDK_CONFIG_URING 00:15:04.611 #define SPDK_CONFIG_URING_PATH 00:15:04.611 #undef SPDK_CONFIG_URING_ZNS 00:15:04.611 #undef SPDK_CONFIG_USDT 00:15:04.611 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:15:04.611 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:04.611 #undef SPDK_CONFIG_VFIO_USER 00:15:04.611 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:04.611 #define SPDK_CONFIG_VHOST 1 00:15:04.611 #define SPDK_CONFIG_VIRTIO 1 00:15:04.611 #undef SPDK_CONFIG_VTUNE 00:15:04.611 #define SPDK_CONFIG_VTUNE_DIR 00:15:04.611 #define SPDK_CONFIG_WERROR 1 00:15:04.611 #define SPDK_CONFIG_WPDK_DIR 00:15:04.611 #define SPDK_CONFIG_XNVME 1 00:15:04.611 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:04.611 18:16:39 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:04.611 18:16:39 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:04.611 18:16:39 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:15:04.611 18:16:39 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.611 18:16:39 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.611 18:16:39 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.611 18:16:39 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.611 18:16:39 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.611 18:16:39 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.611 18:16:39 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:15:04.611 18:16:39 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.611 18:16:39 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:15:04.611 18:16:39 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:15:04.611 18:16:39 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:15:04.872 18:16:39 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:15:04.872 18:16:39 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:15:04.872 18:16:39 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:15:04.872 18:16:39 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:15:04.872 18:16:39 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:15:04.872 18:16:39 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:15:04.872 18:16:39 nvme_xnvme -- pm/common@68 -- # uname -s 00:15:04.872 18:16:39 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:15:04.872 18:16:39 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:15:04.872 
18:16:39 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:15:04.872 18:16:39 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:15:04.872 18:16:39 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:15:04.872 18:16:39 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:15:04.872 18:16:39 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:15:04.872 18:16:39 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:15:04.872 18:16:39 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:15:04.872 18:16:39 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:15:04.872 18:16:39 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:15:04.872 18:16:39 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:15:04.872 18:16:39 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:15:04.872 18:16:39 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:15:04.872 18:16:39 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:15:04.872 18:16:39 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:15:04.872 18:16:39 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:15:04.872 18:16:39 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:15:04.872 18:16:39 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:15:04.872 18:16:39 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:15:04.872 18:16:39 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:15:04.872 18:16:39 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:15:04.872 18:16:39 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:15:04.872 18:16:39 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:15:04.872 18:16:39 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:15:04.873 18:16:39 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:15:04.873 18:16:39 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:04.874 18:16:39 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:15:04.874 18:16:39 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
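
The sanitizer plumbing traced above deserves a note: ASan and UBSan take their runtime policy entirely from colon-separated key=value option strings, and the known libfuse3 leak is silenced through an LSAN suppression file rather than a code change. A minimal standalone reproduction of that setup (the target binary at the end is hypothetical):

    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    supp=/var/tmp/asan_suppression_file
    echo 'leak:libfuse3.so' > "$supp"   # same suppression the harness echoes above
    export LSAN_OPTIONS=suppressions=$supp
    ./some_instrumented_test            # hypothetical ASan/UBSan-built binary
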
00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70188 ]] 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70188 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.VJniBT 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.VJniBT/tests/xnvme /tmp/spdk.VJniBT 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:15:04.875 18:16:39 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975781376 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5591941120 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:04.875 18:16:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261657600 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266421248 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975781376 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5591941120 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:04.876 18:16:39 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266273792 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266421248 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=94058872832 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5643907072 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:15:04.876 * Looking for test storage... 
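
The mount table dumped above comes from set_test_storage: every row of df is read into associative arrays keyed by mount point, and the probe then checks the filesystem behind the test directory for the requested 2 GiB, as the following lines trace. A simplified sketch of that logic (the -B1 flag is an assumption to get byte units matching the trace; the real helper also special-cases tmpfs, ramfs and /home, as the checks below show):

    requested_size=2147483648                    # 2 GiB, as requested above
    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source; fss["$mount"]=$fs
        sizes["$mount"]=$size; avails["$mount"]=$avail; uses["$mount"]=$use
    done < <(df -T -B1 | grep -v Filesystem)     # -B1 assumed, for byte counts
    target_dir=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    (( ${avails[$mount]} >= requested_size )) && printf '* Found test storage at %s\n' "$target_dir"
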
00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:15:04.876 18:16:39 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13975781376 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:04.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:04.877 18:16:39 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:04.877 18:16:39 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:04.877 18:16:39 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:04.877 18:16:39 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:04.877 18:16:39 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:04.877 18:16:39 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:04.877 18:16:39 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:04.877 18:16:39 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:04.877 18:16:39 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:04.877 18:16:39 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:04.877 18:16:39 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:04.877 18:16:39 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:04.877 18:16:39 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:04.877 18:16:39 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:15:04.877 18:16:39 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:04.877 18:16:39 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:04.877 18:16:39 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:04.877 18:16:39 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:04.877 18:16:39 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:04.877 18:16:39 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:04.878 18:16:39 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:04.878 18:16:39 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:04.878 18:16:39 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:04.878 18:16:39 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:04.878 18:16:39 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:04.878 18:16:39 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:04.878 18:16:39 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:04.878 18:16:39 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:04.878 18:16:39 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:15:04.878 18:16:39 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:04.878 18:16:39 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:04.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.878 --rc genhtml_branch_coverage=1 00:15:04.878 --rc genhtml_function_coverage=1 00:15:04.878 --rc genhtml_legend=1 00:15:04.878 --rc geninfo_all_blocks=1 00:15:04.878 --rc geninfo_unexecuted_blocks=1 00:15:04.878 00:15:04.878 ' 00:15:04.878 18:16:39 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:04.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.878 --rc genhtml_branch_coverage=1 00:15:04.878 --rc genhtml_function_coverage=1 00:15:04.878 --rc genhtml_legend=1 00:15:04.878 --rc geninfo_all_blocks=1 
00:15:04.878 --rc geninfo_unexecuted_blocks=1 00:15:04.878 00:15:04.878 ' 00:15:04.878 18:16:39 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:04.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.878 --rc genhtml_branch_coverage=1 00:15:04.878 --rc genhtml_function_coverage=1 00:15:04.878 --rc genhtml_legend=1 00:15:04.878 --rc geninfo_all_blocks=1 00:15:04.878 --rc geninfo_unexecuted_blocks=1 00:15:04.878 00:15:04.878 ' 00:15:04.878 18:16:39 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:04.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:04.878 --rc genhtml_branch_coverage=1 00:15:04.878 --rc genhtml_function_coverage=1 00:15:04.878 --rc genhtml_legend=1 00:15:04.878 --rc geninfo_all_blocks=1 00:15:04.878 --rc geninfo_unexecuted_blocks=1 00:15:04.878 00:15:04.878 ' 00:15:04.878 18:16:39 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:04.878 18:16:39 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:15:04.878 18:16:39 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.878 18:16:39 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.878 18:16:39 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.879 18:16:39 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.879 18:16:39 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.879 18:16:39 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.879 18:16:39 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:15:04.879 18:16:39 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.879 18:16:39 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:15:04.879 18:16:39 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:15:04.879 18:16:39 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:15:04.879 18:16:39 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:15:04.879 18:16:39 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:15:04.879 18:16:39 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:15:04.879 18:16:39 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:15:04.879 18:16:39 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:15:04.879 18:16:39 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:15:04.879 18:16:39 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:15:04.879 18:16:39 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:15:04.879 18:16:39 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:15:04.879 18:16:39 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:15:04.879 18:16:39 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:15:04.879 18:16:39 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:15:04.879 18:16:39 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:15:04.879 18:16:39 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:15:04.879 18:16:39 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:15:04.879 18:16:39 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:15:04.879 18:16:39 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:15:04.879 18:16:39 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:15:04.879 18:16:39 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:05.446 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:05.446 Waiting for block devices as requested 00:15:05.446 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:05.704 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:05.704 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:05.704 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:10.968 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:10.968 18:16:45 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:15:11.227 18:16:45 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:15:11.227 18:16:45 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:15:11.485 18:16:45 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:15:11.485 18:16:45 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:15:11.485 18:16:45 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:15:11.485 18:16:45 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:15:11.485 18:16:45 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:15:11.485 No valid GPT data, bailing 00:15:11.485 18:16:45 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:11.485 18:16:45 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:15:11.485 18:16:45 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:15:11.485 18:16:45 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:15:11.485 18:16:45 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:15:11.485 18:16:45 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:15:11.485 18:16:45 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:15:11.485 18:16:45 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:15:11.485 18:16:45 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:15:11.485 18:16:45 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:15:11.485 18:16:45 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:15:11.485 18:16:45 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:15:11.485 18:16:45 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:15:11.485 18:16:45 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:11.485 18:16:45 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:15:11.485 18:16:45 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:15:11.485 18:16:45 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:11.485 18:16:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:11.485 18:16:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:11.485 18:16:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:11.485 ************************************ 00:15:11.485 START TEST xnvme_rpc 00:15:11.485 ************************************ 00:15:11.485 18:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:11.485 18:16:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:11.485 18:16:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:11.485 18:16:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:11.485 18:16:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:11.485 18:16:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70578 00:15:11.485 18:16:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70578 00:15:11.485 18:16:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:11.485 18:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70578 ']' 00:15:11.485 18:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.485 18:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:11.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.485 18:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.485 18:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:11.485 18:16:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.743 [2024-11-26 18:16:46.038462] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
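
waitforlisten, invoked here for pid 70578, captures a simple pattern: fork spdk_tgt, then poll until the UNIX-domain RPC socket both exists and answers a trivial method call. A reduced sketch of that loop (the real helper is more careful, e.g. it also confirms the listener before each RPC attempt):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &   # same binary as the trace
    spdk_tgt=$!
    rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do                     # max_retries=100, as above
        kill -0 "$spdk_tgt" 2>/dev/null || break        # target died: stop waiting
        if [[ -S $rpc_addr ]] &&
           scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            break                                       # socket is up and answering
        fi
        sleep 0.5
    done
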
00:15:11.743 [2024-11-26 18:16:46.038707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70578 ] 00:15:12.001 [2024-11-26 18:16:46.234297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.001 [2024-11-26 18:16:46.426364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.936 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:12.936 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:12.936 18:16:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:15:12.936 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.936 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.936 xnvme_bdev 00:15:12.936 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.936 18:16:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:12.936 18:16:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:12.936 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.936 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.936 18:16:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:12.936 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.936 18:16:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:12.936 18:16:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:12.936 18:16:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:12.936 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.936 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.936 18:16:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:12.936 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70578 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70578 ']' 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70578 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70578 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:13.201 killing process with pid 70578 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70578' 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70578 00:15:13.201 18:16:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70578 00:15:15.730 00:15:15.730 real 0m3.918s 00:15:15.730 user 0m3.995s 00:15:15.730 sys 0m0.628s 00:15:15.730 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.730 18:16:49 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.730 ************************************ 00:15:15.730 END TEST xnvme_rpc 00:15:15.730 ************************************ 00:15:15.730 18:16:49 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:15.730 18:16:49 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:15.730 18:16:49 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.730 18:16:49 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:15.730 ************************************ 00:15:15.730 START TEST xnvme_bdevperf 00:15:15.730 ************************************ 00:15:15.730 18:16:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:15.730 18:16:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:15.730 18:16:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:15:15.730 18:16:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:15.730 18:16:49 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:15.730 18:16:49 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:15.730 18:16:49 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:15.730 18:16:49 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:15.730 { 00:15:15.730 "subsystems": [ 00:15:15.730 { 00:15:15.730 "subsystem": "bdev", 00:15:15.730 "config": [ 00:15:15.730 { 00:15:15.730 "params": { 00:15:15.730 "io_mechanism": "libaio", 00:15:15.730 "conserve_cpu": false, 00:15:15.730 "filename": "/dev/nvme0n1", 00:15:15.730 "name": "xnvme_bdev" 00:15:15.730 }, 00:15:15.730 "method": "bdev_xnvme_create" 00:15:15.730 }, 00:15:15.730 { 00:15:15.730 "method": "bdev_wait_for_examine" 00:15:15.730 } 00:15:15.730 ] 00:15:15.730 } 00:15:15.730 ] 00:15:15.730 } 00:15:15.730 [2024-11-26 18:16:50.003887] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:15:15.730 [2024-11-26 18:16:50.004091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70663 ] 00:15:15.988 [2024-11-26 18:16:50.196271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.988 [2024-11-26 18:16:50.349387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.554 Running I/O for 5 seconds... 00:15:18.463 28646.00 IOPS, 111.90 MiB/s [2024-11-26T18:16:53.859Z] 27531.00 IOPS, 107.54 MiB/s [2024-11-26T18:16:54.793Z] 27264.67 IOPS, 106.50 MiB/s [2024-11-26T18:16:56.168Z] 26828.00 IOPS, 104.80 MiB/s 00:15:21.707 Latency(us) 00:15:21.707 [2024-11-26T18:16:56.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.707 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:21.707 xnvme_bdev : 5.01 26444.50 103.30 0.00 0.00 2414.29 130.33 31457.28 00:15:21.707 [2024-11-26T18:16:56.168Z] =================================================================================================================== 00:15:21.707 [2024-11-26T18:16:56.168Z] Total : 26444.50 103.30 0.00 0.00 2414.29 130.33 31457.28 00:15:22.642 18:16:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:22.642 18:16:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:22.642 18:16:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:22.642 18:16:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:22.642 18:16:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:22.642 { 00:15:22.642 "subsystems": [ 00:15:22.642 { 00:15:22.642 "subsystem": "bdev", 00:15:22.642 "config": [ 00:15:22.642 { 00:15:22.642 "params": { 00:15:22.642 "io_mechanism": "libaio", 00:15:22.642 "conserve_cpu": false, 00:15:22.642 "filename": "/dev/nvme0n1", 00:15:22.642 "name": "xnvme_bdev" 00:15:22.642 }, 00:15:22.642 "method": "bdev_xnvme_create" 00:15:22.642 }, 00:15:22.642 { 00:15:22.642 "method": "bdev_wait_for_examine" 00:15:22.642 } 00:15:22.642 ] 00:15:22.642 } 00:15:22.642 ] 00:15:22.642 } 00:15:22.642 [2024-11-26 18:16:56.981267] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
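
The JSON block printed above is gen_conf's output: a two-step bdev config that creates the xnvme bdev and then waits for examine, handed to bdevperf through /dev/fd/62 so it never touches disk. The same handoff works with plain process substitution (the fd number will simply differ):

    conf='{"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"io_mechanism":"libaio","conserve_cpu":false,
                 "filename":"/dev/nvme0n1","name":"xnvme_bdev"},
       "method":"bdev_xnvme_create"},
      {"method":"bdev_wait_for_examine"}]}]}'
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(printf '%s' "$conf") -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
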
00:15:22.642 [2024-11-26 18:16:56.981800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70738 ] 00:15:22.900 [2024-11-26 18:16:57.166144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.900 [2024-11-26 18:16:57.293940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.500 Running I/O for 5 seconds... 00:15:25.368 27708.00 IOPS, 108.23 MiB/s [2024-11-26T18:17:00.764Z] 26922.00 IOPS, 105.16 MiB/s [2024-11-26T18:17:01.700Z] 25732.00 IOPS, 100.52 MiB/s [2024-11-26T18:17:02.767Z] 25410.75 IOPS, 99.26 MiB/s [2024-11-26T18:17:02.767Z] 25172.00 IOPS, 98.33 MiB/s 00:15:28.306 Latency(us) 00:15:28.306 [2024-11-26T18:17:02.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.306 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:28.306 xnvme_bdev : 5.01 25152.97 98.25 0.00 0.00 2538.23 610.68 5868.45 00:15:28.306 [2024-11-26T18:17:02.767Z] =================================================================================================================== 00:15:28.306 [2024-11-26T18:17:02.767Z] Total : 25152.97 98.25 0.00 0.00 2538.23 610.68 5868.45 00:15:29.739 00:15:29.739 real 0m13.910s 00:15:29.739 user 0m5.321s 00:15:29.739 sys 0m6.158s 00:15:29.739 18:17:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:29.739 18:17:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:29.739 ************************************ 00:15:29.739 END TEST xnvme_bdevperf 00:15:29.740 ************************************ 00:15:29.740 18:17:03 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:29.740 18:17:03 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:29.740 18:17:03 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:29.740 18:17:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:29.740 ************************************ 00:15:29.740 START TEST xnvme_fio_plugin 00:15:29.740 ************************************ 00:15:29.740 18:17:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:29.740 18:17:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:29.740 18:17:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:15:29.740 18:17:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:29.740 18:17:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:29.740 18:17:03 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:29.740 18:17:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:29.740 18:17:03 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:29.740 18:17:03 nvme_xnvme.xnvme_fio_plugin 
-- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:29.740 18:17:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:29.740 18:17:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:29.740 18:17:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:29.740 18:17:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:29.740 18:17:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:29.740 18:17:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:29.740 18:17:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:29.740 18:17:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:29.740 18:17:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:29.740 18:17:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:29.740 18:17:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:29.740 18:17:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:29.740 18:17:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:29.740 18:17:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:29.740 18:17:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:29.740 { 00:15:29.740 "subsystems": [ 00:15:29.740 { 00:15:29.740 "subsystem": "bdev", 00:15:29.740 "config": [ 00:15:29.740 { 00:15:29.740 "params": { 00:15:29.740 "io_mechanism": "libaio", 00:15:29.740 "conserve_cpu": false, 00:15:29.740 "filename": "/dev/nvme0n1", 00:15:29.740 "name": "xnvme_bdev" 00:15:29.740 }, 00:15:29.740 "method": "bdev_xnvme_create" 00:15:29.740 }, 00:15:29.740 { 00:15:29.740 "method": "bdev_wait_for_examine" 00:15:29.740 } 00:15:29.740 ] 00:15:29.740 } 00:15:29.740 ] 00:15:29.740 } 00:15:29.740 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:29.740 fio-3.35 00:15:29.740 Starting 1 thread 00:15:36.297 00:15:36.297 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70863: Tue Nov 26 18:17:09 2024 00:15:36.297 read: IOPS=22.4k, BW=87.5MiB/s (91.8MB/s)(438MiB/5001msec) 00:15:36.297 slat (usec): min=5, max=1711, avg=40.20, stdev=26.69 00:15:36.297 clat (usec): min=102, max=5562, avg=1569.44, stdev=826.67 00:15:36.297 lat (usec): min=180, max=5685, avg=1609.64, stdev=828.40 00:15:36.297 clat percentiles (usec): 00:15:36.297 | 1.00th=[ 262], 5.00th=[ 392], 10.00th=[ 519], 20.00th=[ 766], 00:15:36.297 | 30.00th=[ 1004], 40.00th=[ 1254], 50.00th=[ 1500], 60.00th=[ 1762], 00:15:36.297 | 70.00th=[ 2040], 80.00th=[ 2343], 90.00th=[ 2704], 95.00th=[ 2966], 00:15:36.297 | 99.00th=[ 3556], 99.50th=[ 3884], 99.90th=[ 4490], 99.95th=[ 4686], 00:15:36.297 | 99.99th=[ 5014] 00:15:36.297 bw ( KiB/s): min=81632, max=108008, 
per=100.00%, avg=90537.78, stdev=8547.10, samples=9 00:15:36.297 iops : min=20408, max=27002, avg=22634.44, stdev=2136.78, samples=9 00:15:36.297 lat (usec) : 250=0.79%, 500=8.49%, 750=10.19%, 1000=10.37% 00:15:36.297 lat (msec) : 2=39.04%, 4=30.73%, 10=0.38% 00:15:36.297 cpu : usr=24.54%, sys=52.86%, ctx=72, majf=0, minf=671 00:15:36.297 IO depths : 1=0.1%, 2=1.2%, 4=5.5%, 8=12.7%, 16=26.1%, 32=52.7%, >=64=1.6% 00:15:36.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.297 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:15:36.297 issued rwts: total=112030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:36.297 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:36.297 00:15:36.297 Run status group 0 (all jobs): 00:15:36.297 READ: bw=87.5MiB/s (91.8MB/s), 87.5MiB/s-87.5MiB/s (91.8MB/s-91.8MB/s), io=438MiB (459MB), run=5001-5001msec 00:15:36.863 ----------------------------------------------------- 00:15:36.863 Suppressions used: 00:15:36.863 count bytes template 00:15:36.863 1 11 /usr/src/fio/parse.c 00:15:36.863 1 8 libtcmalloc_minimal.so 00:15:36.863 1 904 libcrypto.so 00:15:36.863 ----------------------------------------------------- 00:15:36.863 00:15:36.863 18:17:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:36.863 18:17:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:36.863 18:17:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:36.863 18:17:11 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:36.863 18:17:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:36.863 18:17:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:36.863 18:17:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:36.863 18:17:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:36.863 18:17:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:36.863 18:17:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:36.863 18:17:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:36.864 18:17:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:36.864 18:17:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:36.864 18:17:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:36.864 18:17:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:36.864 18:17:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:36.864 18:17:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:36.864 18:17:11 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:36.864 18:17:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:36.864 18:17:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:36.864 18:17:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:37.122 { 00:15:37.122 "subsystems": [ 00:15:37.122 { 00:15:37.122 "subsystem": "bdev", 00:15:37.122 "config": [ 00:15:37.122 { 00:15:37.122 "params": { 00:15:37.122 "io_mechanism": "libaio", 00:15:37.122 "conserve_cpu": false, 00:15:37.122 "filename": "/dev/nvme0n1", 00:15:37.122 "name": "xnvme_bdev" 00:15:37.122 }, 00:15:37.122 "method": "bdev_xnvme_create" 00:15:37.122 }, 00:15:37.122 { 00:15:37.122 "method": "bdev_wait_for_examine" 00:15:37.122 } 00:15:37.122 ] 00:15:37.122 } 00:15:37.122 ] 00:15:37.122 } 00:15:37.122 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:37.122 fio-3.35 00:15:37.122 Starting 1 thread 00:15:43.679 00:15:43.679 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70960: Tue Nov 26 18:17:17 2024 00:15:43.679 write: IOPS=23.2k, BW=90.6MiB/s (95.0MB/s)(453MiB/5001msec); 0 zone resets 00:15:43.679 slat (usec): min=5, max=2929, avg=38.62, stdev=30.02 00:15:43.679 clat (usec): min=95, max=6349, avg=1516.07, stdev=809.72 00:15:43.679 lat (usec): min=176, max=6417, avg=1554.69, stdev=811.71 00:15:43.679 clat percentiles (usec): 00:15:43.679 | 1.00th=[ 258], 5.00th=[ 375], 10.00th=[ 502], 20.00th=[ 742], 00:15:43.679 | 30.00th=[ 971], 40.00th=[ 1188], 50.00th=[ 1434], 60.00th=[ 1680], 00:15:43.679 | 70.00th=[ 1958], 80.00th=[ 2245], 90.00th=[ 2606], 95.00th=[ 2868], 00:15:43.679 | 99.00th=[ 3654], 99.50th=[ 3982], 99.90th=[ 4555], 99.95th=[ 4752], 00:15:43.679 | 99.99th=[ 5800] 00:15:43.679 bw ( KiB/s): min=83792, max=116400, per=100.00%, avg=93648.89, stdev=10262.36, samples=9 00:15:43.679 iops : min=20948, max=29100, avg=23412.22, stdev=2565.59, samples=9 00:15:43.679 lat (usec) : 100=0.01%, 250=0.82%, 500=9.15%, 750=10.36%, 1000=10.90% 00:15:43.679 lat (msec) : 2=40.47%, 4=27.82%, 10=0.47% 00:15:43.679 cpu : usr=24.26%, sys=54.32%, ctx=84, majf=0, minf=671 00:15:43.679 IO depths : 1=0.1%, 2=1.6%, 4=5.6%, 8=12.4%, 16=25.8%, 32=52.8%, >=64=1.7% 00:15:43.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.679 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:15:43.679 issued rwts: total=0,116029,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.679 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:43.679 00:15:43.679 Run status group 0 (all jobs): 00:15:43.679 WRITE: bw=90.6MiB/s (95.0MB/s), 90.6MiB/s-90.6MiB/s (95.0MB/s-95.0MB/s), io=453MiB (475MB), run=5001-5001msec 00:15:44.611 ----------------------------------------------------- 00:15:44.611 Suppressions used: 00:15:44.611 count bytes template 00:15:44.611 1 11 /usr/src/fio/parse.c 00:15:44.611 1 8 libtcmalloc_minimal.so 00:15:44.611 1 904 libcrypto.so 00:15:44.611 ----------------------------------------------------- 00:15:44.611 00:15:44.611 00:15:44.611 real 0m14.938s 00:15:44.611 user 0m6.185s 00:15:44.611 sys 0m6.166s 
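
Both fio passes above follow the same launch recipe, visible in the preceding trace: ldd the SPDK fio plugin to find the ASan runtime it links against, then LD_PRELOAD that runtime ahead of the plugin so the sanitizer's interceptors initialize first. Stripped of xtrace noise, the randwrite invocation comes down to (reusing $conf from the earlier bdevperf sketch):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # /usr/lib64/libasan.so.8 here
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=<(printf '%s' "$conf") \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
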
00:15:44.611 18:17:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.611 18:17:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:44.611 ************************************ 00:15:44.611 END TEST xnvme_fio_plugin 00:15:44.611 ************************************ 00:15:44.611 18:17:18 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:44.611 18:17:18 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:44.612 18:17:18 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:44.612 18:17:18 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:44.612 18:17:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:44.612 18:17:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.612 18:17:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:44.612 ************************************ 00:15:44.612 START TEST xnvme_rpc 00:15:44.612 ************************************ 00:15:44.612 18:17:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:44.612 18:17:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:44.612 18:17:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:44.612 18:17:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:44.612 18:17:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:44.612 18:17:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71050 00:15:44.612 18:17:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71050 00:15:44.612 18:17:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:44.612 18:17:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71050 ']' 00:15:44.612 18:17:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.612 18:17:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.612 18:17:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.612 18:17:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:44.612 18:17:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:44.612 [2024-11-26 18:17:18.940500] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
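The xnvme_rpc pass starting here exercises the same bdev through the RPC surface rather than an I/O path: create the bdev, read the config back, check each parameter, delete it. A sketch of the equivalent manual sequence, assuming rpc_cmd in the trace is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock (paths relative to the SPDK tree):

    # Create with conserve_cpu enabled (-c), as in the trace below
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
    # Read one parameter back; the jq filter is the one rpc_xnvme uses in this log
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # -> true
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev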
00:15:44.612 [2024-11-26 18:17:18.940745] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71050 ] 00:15:44.869 [2024-11-26 18:17:19.116716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.869 [2024-11-26 18:17:19.245205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.802 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.802 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:45.802 18:17:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:15:45.802 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.802 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.802 xnvme_bdev 00:15:45.802 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.802 18:17:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:45.802 18:17:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:45.802 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.802 18:17:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:45.802 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.802 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.802 18:17:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:45.802 18:17:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:45.802 18:17:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:45.802 18:17:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:45.802 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.802 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.802 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.802 18:17:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:15:45.802 18:17:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:45.802 18:17:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:45.803 18:17:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:45.803 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.803 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.803 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.061 18:17:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:15:46.061 18:17:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:46.061 18:17:20 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:46.061 18:17:20 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:46.061 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.061 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.061 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.061 18:17:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:46.061 18:17:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:46.061 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.061 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:46.061 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.061 18:17:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71050 00:15:46.061 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71050 ']' 00:15:46.061 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71050 00:15:46.061 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:46.061 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:46.061 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71050 00:15:46.061 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:46.061 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:46.061 killing process with pid 71050 00:15:46.061 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71050' 00:15:46.061 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71050 00:15:46.061 18:17:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71050 00:15:48.590 00:15:48.590 real 0m3.800s 00:15:48.590 user 0m3.936s 00:15:48.590 sys 0m0.591s 00:15:48.590 18:17:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.590 ************************************ 00:15:48.590 END TEST xnvme_rpc 00:15:48.590 ************************************ 00:15:48.590 18:17:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:48.590 18:17:22 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:48.590 18:17:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:48.590 18:17:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.590 18:17:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:48.590 ************************************ 00:15:48.590 START TEST xnvme_bdevperf 00:15:48.590 ************************************ 00:15:48.590 18:17:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:48.590 18:17:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:48.590 18:17:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:15:48.590 18:17:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:48.590 18:17:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:48.590 18:17:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:15:48.590 18:17:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:48.590 18:17:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:48.590 { 00:15:48.590 "subsystems": [ 00:15:48.590 { 00:15:48.590 "subsystem": "bdev", 00:15:48.590 "config": [ 00:15:48.590 { 00:15:48.590 "params": { 00:15:48.590 "io_mechanism": "libaio", 00:15:48.590 "conserve_cpu": true, 00:15:48.590 "filename": "/dev/nvme0n1", 00:15:48.590 "name": "xnvme_bdev" 00:15:48.590 }, 00:15:48.590 "method": "bdev_xnvme_create" 00:15:48.590 }, 00:15:48.590 { 00:15:48.590 "method": "bdev_wait_for_examine" 00:15:48.590 } 00:15:48.590 ] 00:15:48.590 } 00:15:48.590 ] 00:15:48.590 } 00:15:48.590 [2024-11-26 18:17:22.795479] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:15:48.590 [2024-11-26 18:17:22.795691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71135 ] 00:15:48.590 [2024-11-26 18:17:22.986855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.849 [2024-11-26 18:17:23.144377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.107 Running I/O for 5 seconds... 00:15:51.415 26768.00 IOPS, 104.56 MiB/s [2024-11-26T18:17:26.812Z] 27060.50 IOPS, 105.71 MiB/s [2024-11-26T18:17:27.747Z] 27426.67 IOPS, 107.14 MiB/s [2024-11-26T18:17:28.681Z] 27689.25 IOPS, 108.16 MiB/s 00:15:54.220 Latency(us) 00:15:54.220 [2024-11-26T18:17:28.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.220 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:54.220 xnvme_bdev : 5.00 27947.33 109.17 0.00 0.00 2284.41 220.63 5362.04 00:15:54.220 [2024-11-26T18:17:28.681Z] =================================================================================================================== 00:15:54.220 [2024-11-26T18:17:28.681Z] Total : 27947.33 109.17 0.00 0.00 2284.41 220.63 5362.04 00:15:55.153 18:17:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:55.153 18:17:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:55.153 18:17:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:55.153 18:17:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:55.153 18:17:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:55.412 { 00:15:55.412 "subsystems": [ 00:15:55.412 { 00:15:55.412 "subsystem": "bdev", 00:15:55.412 "config": [ 00:15:55.412 { 00:15:55.412 "params": { 00:15:55.412 "io_mechanism": "libaio", 00:15:55.412 "conserve_cpu": true, 00:15:55.412 "filename": "/dev/nvme0n1", 00:15:55.412 "name": "xnvme_bdev" 00:15:55.412 }, 00:15:55.412 "method": "bdev_xnvme_create" 00:15:55.412 }, 00:15:55.412 { 00:15:55.412 "method": "bdev_wait_for_examine" 00:15:55.412 } 00:15:55.412 ] 00:15:55.412 } 00:15:55.412 ] 00:15:55.412 } 00:15:55.412 [2024-11-26 18:17:29.701714] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
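Both bdevperf passes in this block use the same invocation shape, differing only in -w. A sketch with a config file in place of the harness's /dev/fd/62 (the file name is an assumption; the flag values match the result tables here: depth 64, 4096-byte I/Os, 5-second runs against the xnvme_bdev target):

    # -q = queue depth, -o = I/O size in bytes, -t = seconds,
    # -w = workload, -T = bdev under test
    ./build/examples/bdevperf --json xnvme_bdev.json \
        -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096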
00:15:55.412 [2024-11-26 18:17:29.701904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71210 ] 00:15:55.669 [2024-11-26 18:17:29.889229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.669 [2024-11-26 18:17:30.015526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.928 Running I/O for 5 seconds... 00:15:58.237 22968.00 IOPS, 89.72 MiB/s [2024-11-26T18:17:33.631Z] 23624.00 IOPS, 92.28 MiB/s [2024-11-26T18:17:34.565Z] 24423.33 IOPS, 95.40 MiB/s [2024-11-26T18:17:35.501Z] 24028.75 IOPS, 93.86 MiB/s 00:16:01.040 Latency(us) 00:16:01.040 [2024-11-26T18:17:35.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.040 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:01.040 xnvme_bdev : 5.00 23967.92 93.62 0.00 0.00 2663.29 69.35 9055.88 00:16:01.040 [2024-11-26T18:17:35.501Z] =================================================================================================================== 00:16:01.040 [2024-11-26T18:17:35.501Z] Total : 23967.92 93.62 0.00 0.00 2663.29 69.35 9055.88 00:16:02.421 00:16:02.421 real 0m13.810s 00:16:02.421 user 0m5.301s 00:16:02.421 sys 0m6.070s 00:16:02.421 18:17:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:02.421 ************************************ 00:16:02.421 END TEST xnvme_bdevperf 00:16:02.421 ************************************ 00:16:02.421 18:17:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:02.421 18:17:36 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:02.421 18:17:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:02.421 18:17:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:02.421 18:17:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:02.421 ************************************ 00:16:02.421 START TEST xnvme_fio_plugin 00:16:02.421 ************************************ 00:16:02.421 18:17:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:02.421 18:17:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:02.421 18:17:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:16:02.421 18:17:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:02.421 18:17:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:02.421 18:17:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:02.421 18:17:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:02.421 18:17:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:02.421 18:17:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 
-- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:02.421 18:17:36 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:02.421 18:17:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:02.421 18:17:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:02.421 18:17:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:02.421 18:17:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:02.421 18:17:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:02.421 18:17:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:02.421 18:17:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:02.421 18:17:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:02.421 18:17:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:02.421 18:17:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:02.421 18:17:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:02.421 18:17:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:02.421 18:17:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:02.421 18:17:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:02.421 { 00:16:02.421 "subsystems": [ 00:16:02.421 { 00:16:02.421 "subsystem": "bdev", 00:16:02.421 "config": [ 00:16:02.421 { 00:16:02.421 "params": { 00:16:02.421 "io_mechanism": "libaio", 00:16:02.421 "conserve_cpu": true, 00:16:02.421 "filename": "/dev/nvme0n1", 00:16:02.421 "name": "xnvme_bdev" 00:16:02.421 }, 00:16:02.421 "method": "bdev_xnvme_create" 00:16:02.421 }, 00:16:02.421 { 00:16:02.421 "method": "bdev_wait_for_examine" 00:16:02.421 } 00:16:02.421 ] 00:16:02.421 } 00:16:02.421 ] 00:16:02.421 } 00:16:02.421 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:02.421 fio-3.35 00:16:02.421 Starting 1 thread 00:16:08.984 00:16:08.984 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71332: Tue Nov 26 18:17:42 2024 00:16:08.984 read: IOPS=25.8k, BW=101MiB/s (106MB/s)(504MiB/5001msec) 00:16:08.984 slat (usec): min=5, max=629, avg=34.74, stdev=27.08 00:16:08.984 clat (usec): min=106, max=7634, avg=1365.00, stdev=757.57 00:16:08.984 lat (usec): min=164, max=7712, avg=1399.74, stdev=760.19 00:16:08.984 clat percentiles (usec): 00:16:08.984 | 1.00th=[ 231], 5.00th=[ 338], 10.00th=[ 445], 20.00th=[ 652], 00:16:08.984 | 30.00th=[ 865], 40.00th=[ 1074], 50.00th=[ 1270], 60.00th=[ 1483], 00:16:08.984 | 70.00th=[ 1713], 80.00th=[ 2008], 90.00th=[ 2409], 95.00th=[ 2704], 00:16:08.984 | 99.00th=[ 3458], 99.50th=[ 3884], 99.90th=[ 4555], 99.95th=[ 4817], 00:16:08.984 | 99.99th=[ 5669] 00:16:08.984 bw ( KiB/s): min=96383, max=115288, per=99.19%, avg=102455.89, stdev=6328.99, samples=9 00:16:08.984 
iops : min=24095, max=28822, avg=25613.89, stdev=1582.34, samples=9 00:16:08.984 lat (usec) : 250=1.56%, 500=11.02%, 750=11.94%, 1000=12.06% 00:16:08.984 lat (msec) : 2=43.43%, 4=19.61%, 10=0.38% 00:16:08.984 cpu : usr=24.10%, sys=53.98%, ctx=158, majf=0, minf=680 00:16:08.984 IO depths : 1=0.1%, 2=1.4%, 4=5.2%, 8=12.4%, 16=26.1%, 32=53.1%, >=64=1.7% 00:16:08.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:08.984 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:16:08.984 issued rwts: total=129138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:08.984 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:08.984 00:16:08.984 Run status group 0 (all jobs): 00:16:08.984 READ: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=504MiB (529MB), run=5001-5001msec 00:16:09.552 ----------------------------------------------------- 00:16:09.552 Suppressions used: 00:16:09.552 count bytes template 00:16:09.552 1 11 /usr/src/fio/parse.c 00:16:09.552 1 8 libtcmalloc_minimal.so 00:16:09.552 1 904 libcrypto.so 00:16:09.552 ----------------------------------------------------- 00:16:09.552 00:16:09.552 18:17:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:09.552 18:17:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:09.552 18:17:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:09.552 18:17:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:09.552 18:17:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:09.552 18:17:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:09.552 18:17:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:09.552 18:17:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:09.552 18:17:43 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:09.552 18:17:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:09.552 18:17:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:09.552 18:17:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:09.552 18:17:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:09.552 18:17:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:09.552 18:17:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:09.552 18:17:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:09.810 18:17:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:09.810 18:17:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:16:09.810 18:17:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:09.810 18:17:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:09.810 18:17:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:09.810 { 00:16:09.810 "subsystems": [ 00:16:09.810 { 00:16:09.810 "subsystem": "bdev", 00:16:09.810 "config": [ 00:16:09.810 { 00:16:09.810 "params": { 00:16:09.810 "io_mechanism": "libaio", 00:16:09.810 "conserve_cpu": true, 00:16:09.810 "filename": "/dev/nvme0n1", 00:16:09.810 "name": "xnvme_bdev" 00:16:09.810 }, 00:16:09.810 "method": "bdev_xnvme_create" 00:16:09.810 }, 00:16:09.810 { 00:16:09.810 "method": "bdev_wait_for_examine" 00:16:09.810 } 00:16:09.810 ] 00:16:09.810 } 00:16:09.810 ] 00:16:09.810 } 00:16:09.810 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:09.810 fio-3.35 00:16:09.810 Starting 1 thread 00:16:16.429 00:16:16.429 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71429: Tue Nov 26 18:17:50 2024 00:16:16.429 write: IOPS=24.3k, BW=94.9MiB/s (99.5MB/s)(474MiB/5001msec); 0 zone resets 00:16:16.429 slat (usec): min=5, max=507, avg=36.94, stdev=28.09 00:16:16.429 clat (usec): min=112, max=7193, avg=1451.01, stdev=786.31 00:16:16.429 lat (usec): min=165, max=7199, avg=1487.95, stdev=788.33 00:16:16.429 clat percentiles (usec): 00:16:16.429 | 1.00th=[ 247], 5.00th=[ 359], 10.00th=[ 474], 20.00th=[ 701], 00:16:16.429 | 30.00th=[ 922], 40.00th=[ 1139], 50.00th=[ 1369], 60.00th=[ 1598], 00:16:16.429 | 70.00th=[ 1860], 80.00th=[ 2147], 90.00th=[ 2540], 95.00th=[ 2802], 00:16:16.429 | 99.00th=[ 3425], 99.50th=[ 3785], 99.90th=[ 4686], 99.95th=[ 5080], 00:16:16.429 | 99.99th=[ 6980] 00:16:16.429 bw ( KiB/s): min=89288, max=113784, per=100.00%, avg=98491.56, stdev=8640.49, samples=9 00:16:16.429 iops : min=22322, max=28446, avg=24622.89, stdev=2160.12, samples=9 00:16:16.429 lat (usec) : 250=1.09%, 500=10.08%, 750=10.99%, 1000=11.34% 00:16:16.429 lat (msec) : 2=41.45%, 4=24.69%, 10=0.35% 00:16:16.430 cpu : usr=24.60%, sys=53.96%, ctx=98, majf=0, minf=730 00:16:16.430 IO depths : 1=0.1%, 2=1.6%, 4=5.6%, 8=12.5%, 16=26.0%, 32=52.6%, >=64=1.7% 00:16:16.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.430 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:16:16.430 issued rwts: total=0,121451,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.430 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:16.430 00:16:16.430 Run status group 0 (all jobs): 00:16:16.430 WRITE: bw=94.9MiB/s (99.5MB/s), 94.9MiB/s-94.9MiB/s (99.5MB/s-99.5MB/s), io=474MiB (497MB), run=5001-5001msec 00:16:16.996 ----------------------------------------------------- 00:16:16.996 Suppressions used: 00:16:16.996 count bytes template 00:16:16.996 1 11 /usr/src/fio/parse.c 00:16:16.996 1 8 libtcmalloc_minimal.so 00:16:16.996 1 904 libcrypto.so 00:16:16.996 ----------------------------------------------------- 00:16:16.996 00:16:16.996 00:16:16.996 real 0m14.920s 00:16:16.996 user 0m6.213s 00:16:16.996 sys 0m6.195s 00:16:16.996 18:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:16:16.996 ************************************ 00:16:16.996 END TEST xnvme_fio_plugin 00:16:16.996 ************************************ 00:16:16.996 18:17:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:17.255 18:17:51 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:16:17.255 18:17:51 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:16:17.255 18:17:51 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:16:17.255 18:17:51 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:16:17.255 18:17:51 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:16:17.255 18:17:51 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:17.255 18:17:51 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:16:17.255 18:17:51 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:16:17.255 18:17:51 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:17.255 18:17:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:17.255 18:17:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.255 18:17:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:17.255 ************************************ 00:16:17.255 START TEST xnvme_rpc 00:16:17.255 ************************************ 00:16:17.255 18:17:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:17.255 18:17:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:17.255 18:17:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:17.255 18:17:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:17.255 18:17:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:17.255 18:17:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71520 00:16:17.255 18:17:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71520 00:16:17.255 18:17:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:17.255 18:17:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71520 ']' 00:16:17.255 18:17:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.255 18:17:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.255 18:17:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.255 18:17:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.255 18:17:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:17.255 [2024-11-26 18:17:51.656835] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
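From here the suite repeats with io_mechanism=io_uring. Note how conserve_cpu reaches the RPC: the cc[] declarations at xnvme.sh@48-50 in this trace map "false" to an empty argument and "true" to -c, which is why the create call just below passes ''. A sketch of that mapping:

    # Mirrors the cc[] setup from xnvme.sh@48-50 above
    declare -A cc=( [false]="" [true]="-c" )
    conserve_cpu=false
    rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring "${cc[$conserve_cpu]}"   # passes ''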
00:16:17.255 [2024-11-26 18:17:51.658392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71520 ] 00:16:17.513 [2024-11-26 18:17:51.852926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.771 [2024-11-26 18:17:51.980192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.704 xnvme_bdev 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.704 18:17:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:18.704 18:17:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.704 18:17:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:16:18.704 18:17:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:18.705 18:17:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:18.705 18:17:53 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.705 18:17:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:18.705 18:17:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.705 18:17:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.705 18:17:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:16:18.705 18:17:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:18.705 18:17:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.705 18:17:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:18.705 18:17:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.705 18:17:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71520 00:16:18.705 18:17:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71520 ']' 00:16:18.705 18:17:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71520 00:16:18.705 18:17:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:18.705 18:17:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.705 18:17:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71520 00:16:18.705 killing process with pid 71520 00:16:18.705 18:17:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:18.705 18:17:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:18.705 18:17:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71520' 00:16:18.705 18:17:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71520 00:16:18.705 18:17:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71520 00:16:21.234 ************************************ 00:16:21.234 END TEST xnvme_rpc 00:16:21.234 ************************************ 00:16:21.234 00:16:21.234 real 0m3.929s 00:16:21.234 user 0m4.064s 00:16:21.234 sys 0m0.620s 00:16:21.234 18:17:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:21.234 18:17:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.234 18:17:55 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:21.234 18:17:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:21.234 18:17:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:21.234 18:17:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:21.234 ************************************ 00:16:21.234 START TEST xnvme_bdevperf 00:16:21.234 ************************************ 00:16:21.234 18:17:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:21.234 18:17:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:21.234 18:17:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:16:21.234 18:17:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:21.234 18:17:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:21.234 18:17:55 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:21.234 18:17:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:21.234 18:17:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:21.234 { 00:16:21.234 "subsystems": [ 00:16:21.234 { 00:16:21.234 "subsystem": "bdev", 00:16:21.234 "config": [ 00:16:21.234 { 00:16:21.234 "params": { 00:16:21.234 "io_mechanism": "io_uring", 00:16:21.234 "conserve_cpu": false, 00:16:21.234 "filename": "/dev/nvme0n1", 00:16:21.234 "name": "xnvme_bdev" 00:16:21.234 }, 00:16:21.234 "method": "bdev_xnvme_create" 00:16:21.234 }, 00:16:21.234 { 00:16:21.234 "method": "bdev_wait_for_examine" 00:16:21.234 } 00:16:21.234 ] 00:16:21.234 } 00:16:21.234 ] 00:16:21.234 } 00:16:21.234 [2024-11-26 18:17:55.604415] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:16:21.234 [2024-11-26 18:17:55.604664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71601 ] 00:16:21.492 [2024-11-26 18:17:55.794821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.492 [2024-11-26 18:17:55.930082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.060 Running I/O for 5 seconds... 00:16:23.927 43910.00 IOPS, 171.52 MiB/s [2024-11-26T18:17:59.324Z] 46272.00 IOPS, 180.75 MiB/s [2024-11-26T18:18:00.702Z] 46008.67 IOPS, 179.72 MiB/s [2024-11-26T18:18:01.637Z] 45968.75 IOPS, 179.57 MiB/s 00:16:27.176 Latency(us) 00:16:27.176 [2024-11-26T18:18:01.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.176 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:27.176 xnvme_bdev : 5.00 45998.42 179.68 0.00 0.00 1386.82 404.01 9055.88 00:16:27.176 [2024-11-26T18:18:01.637Z] =================================================================================================================== 00:16:27.176 [2024-11-26T18:18:01.637Z] Total : 45998.42 179.68 0.00 0.00 1386.82 404.01 9055.88 00:16:28.116 18:18:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:28.116 18:18:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:28.116 18:18:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:28.116 18:18:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:28.116 18:18:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:28.116 { 00:16:28.116 "subsystems": [ 00:16:28.116 { 00:16:28.116 "subsystem": "bdev", 00:16:28.116 "config": [ 00:16:28.116 { 00:16:28.116 "params": { 00:16:28.116 "io_mechanism": "io_uring", 00:16:28.116 "conserve_cpu": false, 00:16:28.116 "filename": "/dev/nvme0n1", 00:16:28.116 "name": "xnvme_bdev" 00:16:28.116 }, 00:16:28.116 "method": "bdev_xnvme_create" 00:16:28.116 }, 00:16:28.116 { 00:16:28.116 "method": "bdev_wait_for_examine" 00:16:28.116 } 00:16:28.116 ] 00:16:28.116 } 00:16:28.116 ] 00:16:28.116 } 00:16:28.373 [2024-11-26 18:18:02.625190] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
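For reference, the MiB/s column in these bdevperf tables is just IOPS times the 4096-byte I/O size divided by 2^20: the io_uring randread pass above reports 45998.42 IOPS, i.e. 45998.42 * 4096 / 1048576 = 179.68 MiB/s, matching the table (versus 27947.33 IOPS / 109.17 MiB/s for the libaio randread pass earlier in this log). The same check as a one-liner:

    echo 'scale=2; 45998.42 * 4096 / 1048576' | bc   # -> 179.68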
00:16:28.373 [2024-11-26 18:18:02.625378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71682 ] 00:16:28.373 [2024-11-26 18:18:02.812724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.631 [2024-11-26 18:18:02.940140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.890 Running I/O for 5 seconds... 00:16:31.201 44358.00 IOPS, 173.27 MiB/s [2024-11-26T18:18:06.596Z] 44666.50 IOPS, 174.48 MiB/s [2024-11-26T18:18:07.532Z] 44845.33 IOPS, 175.18 MiB/s [2024-11-26T18:18:08.465Z] 44654.00 IOPS, 174.43 MiB/s 00:16:34.004 Latency(us) 00:16:34.004 [2024-11-26T18:18:08.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:34.004 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:34.004 xnvme_bdev : 5.00 44747.14 174.79 0.00 0.00 1425.62 185.25 6821.70 00:16:34.004 [2024-11-26T18:18:08.465Z] =================================================================================================================== 00:16:34.004 [2024-11-26T18:18:08.465Z] Total : 44747.14 174.79 0.00 0.00 1425.62 185.25 6821.70 00:16:34.937 00:16:34.937 real 0m13.889s 00:16:34.937 user 0m7.274s 00:16:34.937 sys 0m6.385s 00:16:34.937 18:18:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.937 18:18:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:34.937 ************************************ 00:16:34.937 END TEST xnvme_bdevperf 00:16:34.937 ************************************ 00:16:35.194 18:18:09 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:35.194 18:18:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:35.194 18:18:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:35.194 18:18:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:35.194 ************************************ 00:16:35.194 START TEST xnvme_fio_plugin 00:16:35.194 ************************************ 00:16:35.194 18:18:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:35.194 18:18:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:35.194 18:18:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:16:35.194 18:18:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:35.194 18:18:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:35.194 18:18:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:35.194 18:18:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:35.194 18:18:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:35.194 18:18:09 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:35.195 18:18:09 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:35.195 18:18:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:35.195 18:18:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:35.195 18:18:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:35.195 18:18:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:35.195 18:18:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:35.195 18:18:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:35.195 18:18:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:35.195 18:18:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:35.195 18:18:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:35.195 18:18:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:35.195 18:18:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:35.195 18:18:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:35.195 18:18:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:35.195 18:18:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:35.195 { 00:16:35.195 "subsystems": [ 00:16:35.195 { 00:16:35.195 "subsystem": "bdev", 00:16:35.195 "config": [ 00:16:35.195 { 00:16:35.195 "params": { 00:16:35.195 "io_mechanism": "io_uring", 00:16:35.195 "conserve_cpu": false, 00:16:35.195 "filename": "/dev/nvme0n1", 00:16:35.195 "name": "xnvme_bdev" 00:16:35.195 }, 00:16:35.195 "method": "bdev_xnvme_create" 00:16:35.195 }, 00:16:35.195 { 00:16:35.195 "method": "bdev_wait_for_examine" 00:16:35.195 } 00:16:35.195 ] 00:16:35.195 } 00:16:35.195 ] 00:16:35.195 } 00:16:35.453 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:35.453 fio-3.35 00:16:35.453 Starting 1 thread 00:16:42.009 00:16:42.009 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71802: Tue Nov 26 18:18:15 2024 00:16:42.009 read: IOPS=45.9k, BW=179MiB/s (188MB/s)(897MiB/5001msec) 00:16:42.009 slat (usec): min=2, max=140, avg= 4.10, stdev= 2.05 00:16:42.009 clat (usec): min=166, max=8616, avg=1236.08, stdev=313.43 00:16:42.009 lat (usec): min=177, max=8619, avg=1240.18, stdev=313.74 00:16:42.009 clat percentiles (usec): 00:16:42.009 | 1.00th=[ 873], 5.00th=[ 979], 10.00th=[ 1020], 20.00th=[ 1074], 00:16:42.009 | 30.00th=[ 1106], 40.00th=[ 1156], 50.00th=[ 1188], 60.00th=[ 1221], 00:16:42.009 | 70.00th=[ 1270], 80.00th=[ 1336], 90.00th=[ 1434], 95.00th=[ 1614], 00:16:42.009 | 99.00th=[ 2474], 99.50th=[ 3326], 99.90th=[ 4686], 99.95th=[ 5211], 00:16:42.009 | 99.99th=[ 6587] 00:16:42.009 bw ( KiB/s): min=164248, max=199168, per=100.00%, avg=184056.00, 
stdev=14715.63, samples=9 00:16:42.009 iops : min=41062, max=49792, avg=46014.00, stdev=3678.91, samples=9 00:16:42.009 lat (usec) : 250=0.01%, 500=0.10%, 750=0.26%, 1000=7.16% 00:16:42.009 lat (msec) : 2=90.58%, 4=1.67%, 10=0.23% 00:16:42.009 cpu : usr=36.36%, sys=62.66%, ctx=11, majf=0, minf=762 00:16:42.009 IO depths : 1=1.2%, 2=2.6%, 4=5.6%, 8=12.0%, 16=25.0%, 32=51.9%, >=64=1.7% 00:16:42.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:42.009 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:16:42.009 issued rwts: total=229585,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:42.009 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:42.009 00:16:42.009 Run status group 0 (all jobs): 00:16:42.009 READ: bw=179MiB/s (188MB/s), 179MiB/s-179MiB/s (188MB/s-188MB/s), io=897MiB (940MB), run=5001-5001msec 00:16:42.576 ----------------------------------------------------- 00:16:42.576 Suppressions used: 00:16:42.576 count bytes template 00:16:42.576 1 11 /usr/src/fio/parse.c 00:16:42.576 1 8 libtcmalloc_minimal.so 00:16:42.576 1 904 libcrypto.so 00:16:42.576 ----------------------------------------------------- 00:16:42.576 00:16:42.576 18:18:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:42.576 18:18:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:42.576 18:18:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:42.576 18:18:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:42.576 18:18:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:42.576 18:18:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:42.576 18:18:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:42.576 18:18:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:42.576 18:18:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:42.576 18:18:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:42.576 18:18:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:42.576 18:18:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:42.576 18:18:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:42.576 18:18:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:42.576 18:18:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:42.576 18:18:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:42.576 18:18:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:42.576 18:18:16 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:42.576 18:18:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:42.576 18:18:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:42.576 18:18:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:42.576 { 00:16:42.576 "subsystems": [ 00:16:42.576 { 00:16:42.576 "subsystem": "bdev", 00:16:42.576 "config": [ 00:16:42.576 { 00:16:42.576 "params": { 00:16:42.576 "io_mechanism": "io_uring", 00:16:42.576 "conserve_cpu": false, 00:16:42.576 "filename": "/dev/nvme0n1", 00:16:42.576 "name": "xnvme_bdev" 00:16:42.576 }, 00:16:42.576 "method": "bdev_xnvme_create" 00:16:42.576 }, 00:16:42.576 { 00:16:42.576 "method": "bdev_wait_for_examine" 00:16:42.576 } 00:16:42.576 ] 00:16:42.576 } 00:16:42.576 ] 00:16:42.576 } 00:16:42.834 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:42.834 fio-3.35 00:16:42.834 Starting 1 thread 00:16:49.392 00:16:49.392 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71900: Tue Nov 26 18:18:22 2024 00:16:49.392 write: IOPS=43.8k, BW=171MiB/s (179MB/s)(855MiB/5002msec); 0 zone resets 00:16:49.392 slat (nsec): min=2820, max=79203, avg=5004.60, stdev=3040.95 00:16:49.392 clat (usec): min=277, max=4088, avg=1262.11, stdev=252.70 00:16:49.392 lat (usec): min=281, max=4121, avg=1267.11, stdev=254.21 00:16:49.392 clat percentiles (usec): 00:16:49.392 | 1.00th=[ 922], 5.00th=[ 988], 10.00th=[ 1029], 20.00th=[ 1090], 00:16:49.392 | 30.00th=[ 1139], 40.00th=[ 1172], 50.00th=[ 1221], 60.00th=[ 1270], 00:16:49.392 | 70.00th=[ 1319], 80.00th=[ 1385], 90.00th=[ 1516], 95.00th=[ 1663], 00:16:49.392 | 99.00th=[ 2212], 99.50th=[ 2704], 99.90th=[ 3392], 99.95th=[ 3523], 00:16:49.392 | 99.99th=[ 3720] 00:16:49.392 bw ( KiB/s): min=149504, max=205568, per=100.00%, avg=176287.11, stdev=17489.95, samples=9 00:16:49.392 iops : min=37376, max=51392, avg=44071.78, stdev=4372.49, samples=9 00:16:49.392 lat (usec) : 500=0.01%, 750=0.03%, 1000=6.26% 00:16:49.392 lat (msec) : 2=92.27%, 4=1.43%, 10=0.01% 00:16:49.392 cpu : usr=42.05%, sys=56.85%, ctx=17, majf=0, minf=763 00:16:49.392 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:49.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:49.392 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:16:49.392 issued rwts: total=0,218963,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:49.392 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:49.392 00:16:49.392 Run status group 0 (all jobs): 00:16:49.392 WRITE: bw=171MiB/s (179MB/s), 171MiB/s-171MiB/s (179MB/s-179MB/s), io=855MiB (897MB), run=5002-5002msec 00:16:50.034 ----------------------------------------------------- 00:16:50.034 Suppressions used: 00:16:50.034 count bytes template 00:16:50.034 1 11 /usr/src/fio/parse.c 00:16:50.034 1 8 libtcmalloc_minimal.so 00:16:50.034 1 904 libcrypto.so 00:16:50.034 ----------------------------------------------------- 00:16:50.034 00:16:50.034 00:16:50.034 real 0m14.905s 00:16:50.034 user 0m7.741s 00:16:50.034 sys 0m6.770s 00:16:50.034 18:18:24 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.034 18:18:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:50.034 ************************************ 00:16:50.034 END TEST xnvme_fio_plugin 00:16:50.034 ************************************ 00:16:50.034 18:18:24 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:50.034 18:18:24 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:16:50.034 18:18:24 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:16:50.034 18:18:24 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:50.034 18:18:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:50.034 18:18:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.034 18:18:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:50.034 ************************************ 00:16:50.034 START TEST xnvme_rpc 00:16:50.034 ************************************ 00:16:50.034 18:18:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:50.034 18:18:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:50.034 18:18:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:50.034 18:18:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:50.034 18:18:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:50.034 18:18:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71986 00:16:50.034 18:18:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71986 00:16:50.034 18:18:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:50.034 18:18:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71986 ']' 00:16:50.034 18:18:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.034 18:18:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.034 18:18:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.034 18:18:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.034 18:18:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.310 [2024-11-26 18:18:24.539956] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
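The spdk_tgt lifecycle around each xnvme_rpc pass follows the pattern visible in the trace: launch the target, block in waitforlisten until /var/tmp/spdk.sock accepts RPCs, run the checks, then killprocess (kill followed by wait on the pid, per autotest_common.sh@973/@978 above). A minimal sketch, assuming those autotest_common.sh helpers are sourced:

    ./build/bin/spdk_tgt &
    spdk_tgt=$!
    waitforlisten "$spdk_tgt"      # polls until /var/tmp/spdk.sock is up
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev
    killprocess "$spdk_tgt"        # kill + wait on the target pid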
00:16:50.310 [2024-11-26 18:18:24.540215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71986 ] 00:16:50.310 [2024-11-26 18:18:24.731019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.571 [2024-11-26 18:18:24.858590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.506 xnvme_bdev 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71986 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71986 ']' 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71986 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71986 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:51.506 killing process with pid 71986 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71986' 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71986 00:16:51.506 18:18:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71986 00:16:54.039 00:16:54.039 real 0m3.662s 00:16:54.039 user 0m3.796s 00:16:54.039 sys 0m0.582s 00:16:54.039 18:18:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:54.039 18:18:28 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.039 ************************************ 00:16:54.039 END TEST xnvme_rpc 00:16:54.039 ************************************ 00:16:54.039 18:18:28 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:54.039 18:18:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:54.039 18:18:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:54.039 18:18:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:54.039 ************************************ 00:16:54.039 START TEST xnvme_bdevperf 00:16:54.039 ************************************ 00:16:54.039 18:18:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:54.039 18:18:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:54.039 18:18:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:16:54.039 18:18:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:54.039 18:18:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:54.039 18:18:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
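The gen_conf call above emits the JSON config that bdevperf reads back on fd 62; the dump follows next in the trace. A minimal standalone sketch of the same invocation — reusing the device path, bdev name, and flags from this section, with process substitution standing in for the harness's fd-62 plumbing, which is not shown here — would be:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json <(cat <<'EOF'
{"subsystems":[{"subsystem":"bdev","config":[
  {"method":"bdev_xnvme_create",
   "params":{"io_mechanism":"io_uring","conserve_cpu":true,
             "filename":"/dev/nvme0n1","name":"xnvme_bdev"}},
  {"method":"bdev_wait_for_examine"}]}]}
EOF
) -q 64 -w randread -t 5 -T xnvme_bdev -o 4096

The trailing bdev_wait_for_examine entry holds back framework startup until the newly created xnvme bdev has been examined, so the benchmark only begins once the bdev is ready.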
00:16:54.039 18:18:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:54.039 18:18:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:54.039 { 00:16:54.039 "subsystems": [ 00:16:54.039 { 00:16:54.039 "subsystem": "bdev", 00:16:54.039 "config": [ 00:16:54.039 { 00:16:54.039 "params": { 00:16:54.039 "io_mechanism": "io_uring", 00:16:54.039 "conserve_cpu": true, 00:16:54.039 "filename": "/dev/nvme0n1", 00:16:54.039 "name": "xnvme_bdev" 00:16:54.039 }, 00:16:54.039 "method": "bdev_xnvme_create" 00:16:54.039 }, 00:16:54.039 { 00:16:54.039 "method": "bdev_wait_for_examine" 00:16:54.039 } 00:16:54.039 ] 00:16:54.039 } 00:16:54.039 ] 00:16:54.039 } 00:16:54.039 [2024-11-26 18:18:28.225208] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:16:54.039 [2024-11-26 18:18:28.225401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72066 ] 00:16:54.039 [2024-11-26 18:18:28.405996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.299 [2024-11-26 18:18:28.549870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.557 Running I/O for 5 seconds... 00:16:56.868 47424.00 IOPS, 185.25 MiB/s [2024-11-26T18:18:32.263Z] 48254.50 IOPS, 188.49 MiB/s [2024-11-26T18:18:33.199Z] 49214.67 IOPS, 192.24 MiB/s [2024-11-26T18:18:34.134Z] 49327.00 IOPS, 192.68 MiB/s [2024-11-26T18:18:34.134Z] 49694.80 IOPS, 194.12 MiB/s 00:16:59.673 Latency(us) 00:16:59.673 [2024-11-26T18:18:34.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.673 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:59.673 xnvme_bdev : 5.00 49674.01 194.04 0.00 0.00 1284.73 297.89 4230.05 00:16:59.673 [2024-11-26T18:18:34.134Z] =================================================================================================================== 00:16:59.673 [2024-11-26T18:18:34.134Z] Total : 49674.01 194.04 0.00 0.00 1284.73 297.89 4230.05 00:17:00.605 18:18:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:00.605 18:18:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:00.605 18:18:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:00.605 18:18:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:00.605 18:18:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:00.605 { 00:17:00.605 "subsystems": [ 00:17:00.605 { 00:17:00.605 "subsystem": "bdev", 00:17:00.605 "config": [ 00:17:00.605 { 00:17:00.605 "params": { 00:17:00.605 "io_mechanism": "io_uring", 00:17:00.605 "conserve_cpu": true, 00:17:00.605 "filename": "/dev/nvme0n1", 00:17:00.605 "name": "xnvme_bdev" 00:17:00.606 }, 00:17:00.606 "method": "bdev_xnvme_create" 00:17:00.606 }, 00:17:00.606 { 00:17:00.606 "method": "bdev_wait_for_examine" 00:17:00.606 } 00:17:00.606 ] 00:17:00.606 } 00:17:00.606 ] 00:17:00.606 } 00:17:00.606 [2024-11-26 18:18:34.990461] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
00:17:00.606 [2024-11-26 18:18:34.990692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72142 ] 00:17:00.863 [2024-11-26 18:18:35.177211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.863 [2024-11-26 18:18:35.308101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.429 Running I/O for 5 seconds... 00:17:03.385 42688.00 IOPS, 166.75 MiB/s [2024-11-26T18:18:38.778Z] 43008.00 IOPS, 168.00 MiB/s [2024-11-26T18:18:39.719Z] 42752.00 IOPS, 167.00 MiB/s [2024-11-26T18:18:41.093Z] 42608.00 IOPS, 166.44 MiB/s 00:17:06.632 Latency(us) 00:17:06.632 [2024-11-26T18:18:41.093Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.632 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:06.632 xnvme_bdev : 5.00 42581.03 166.33 0.00 0.00 1497.97 875.05 6374.87 00:17:06.632 [2024-11-26T18:18:41.093Z] =================================================================================================================== 00:17:06.632 [2024-11-26T18:18:41.093Z] Total : 42581.03 166.33 0.00 0.00 1497.97 875.05 6374.87 00:17:07.200 00:17:07.200 real 0m13.494s 00:17:07.200 user 0m8.208s 00:17:07.200 sys 0m4.730s 00:17:07.200 18:18:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:07.200 18:18:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:07.200 ************************************ 00:17:07.200 END TEST xnvme_bdevperf 00:17:07.200 ************************************ 00:17:07.459 18:18:41 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:07.459 18:18:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:07.459 18:18:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:07.459 18:18:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:07.459 ************************************ 00:17:07.459 START TEST xnvme_fio_plugin 00:17:07.459 ************************************ 00:17:07.459 18:18:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:07.459 18:18:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:07.459 18:18:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:17:07.459 18:18:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:07.459 18:18:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:07.459 18:18:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:07.459 18:18:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:07.459 18:18:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:07.459 18:18:41 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:07.459 18:18:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:07.459 18:18:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:07.459 18:18:41 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:07.459 18:18:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:07.459 18:18:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:07.459 18:18:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:07.459 18:18:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:07.459 18:18:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:07.459 18:18:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:07.459 18:18:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:07.459 18:18:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:07.459 18:18:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:07.459 18:18:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:07.459 18:18:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:07.459 18:18:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:07.459 { 00:17:07.459 "subsystems": [ 00:17:07.459 { 00:17:07.459 "subsystem": "bdev", 00:17:07.459 "config": [ 00:17:07.459 { 00:17:07.459 "params": { 00:17:07.459 "io_mechanism": "io_uring", 00:17:07.459 "conserve_cpu": true, 00:17:07.459 "filename": "/dev/nvme0n1", 00:17:07.459 "name": "xnvme_bdev" 00:17:07.459 }, 00:17:07.459 "method": "bdev_xnvme_create" 00:17:07.459 }, 00:17:07.459 { 00:17:07.459 "method": "bdev_wait_for_examine" 00:17:07.459 } 00:17:07.459 ] 00:17:07.459 } 00:17:07.459 ] 00:17:07.459 } 00:17:07.718 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:07.718 fio-3.35 00:17:07.718 Starting 1 thread 00:17:14.279 00:17:14.279 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72266: Tue Nov 26 18:18:47 2024 00:17:14.279 read: IOPS=49.7k, BW=194MiB/s (204MB/s)(971MiB/5001msec) 00:17:14.279 slat (usec): min=2, max=1382, avg= 3.68, stdev= 3.39 00:17:14.279 clat (usec): min=810, max=2876, avg=1139.52, stdev=133.70 00:17:14.279 lat (usec): min=814, max=2902, avg=1143.20, stdev=134.30 00:17:14.279 clat percentiles (usec): 00:17:14.279 | 1.00th=[ 914], 5.00th=[ 963], 10.00th=[ 996], 20.00th=[ 1037], 00:17:14.279 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1123], 60.00th=[ 1156], 00:17:14.279 | 70.00th=[ 1188], 80.00th=[ 1237], 90.00th=[ 1303], 95.00th=[ 1369], 00:17:14.279 | 99.00th=[ 1598], 99.50th=[ 1696], 99.90th=[ 1893], 99.95th=[ 2024], 00:17:14.279 | 99.99th=[ 2638] 00:17:14.279 bw ( KiB/s): min=182784, max=210432, per=99.46%, avg=197745.78, 
stdev=9522.99, samples=9 00:17:14.279 iops : min=45696, max=52608, avg=49436.44, stdev=2380.75, samples=9 00:17:14.279 lat (usec) : 1000=11.66% 00:17:14.279 lat (msec) : 2=88.29%, 4=0.06% 00:17:14.279 cpu : usr=44.44%, sys=50.94%, ctx=10, majf=0, minf=762 00:17:14.279 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:14.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:14.279 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:14.279 issued rwts: total=248576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:14.279 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:14.279 00:17:14.279 Run status group 0 (all jobs): 00:17:14.279 READ: bw=194MiB/s (204MB/s), 194MiB/s-194MiB/s (204MB/s-204MB/s), io=971MiB (1018MB), run=5001-5001msec 00:17:14.538 ----------------------------------------------------- 00:17:14.538 Suppressions used: 00:17:14.538 count bytes template 00:17:14.538 1 11 /usr/src/fio/parse.c 00:17:14.538 1 8 libtcmalloc_minimal.so 00:17:14.538 1 904 libcrypto.so 00:17:14.538 ----------------------------------------------------- 00:17:14.538 00:17:14.538 18:18:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:14.538 18:18:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:14.538 18:18:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:14.538 18:18:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:14.538 18:18:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:14.538 18:18:48 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:14.538 18:18:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:14.538 18:18:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:14.538 18:18:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:14.538 18:18:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:14.538 18:18:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:14.538 18:18:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:14.538 18:18:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:14.538 18:18:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:14.538 18:18:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:14.538 18:18:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:14.538 18:18:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:14.538 18:18:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:17:14.538 18:18:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:14.538 18:18:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:14.538 18:18:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:14.538 { 00:17:14.538 "subsystems": [ 00:17:14.538 { 00:17:14.538 "subsystem": "bdev", 00:17:14.538 "config": [ 00:17:14.538 { 00:17:14.538 "params": { 00:17:14.538 "io_mechanism": "io_uring", 00:17:14.538 "conserve_cpu": true, 00:17:14.538 "filename": "/dev/nvme0n1", 00:17:14.538 "name": "xnvme_bdev" 00:17:14.538 }, 00:17:14.538 "method": "bdev_xnvme_create" 00:17:14.538 }, 00:17:14.538 { 00:17:14.538 "method": "bdev_wait_for_examine" 00:17:14.538 } 00:17:14.538 ] 00:17:14.538 } 00:17:14.538 ] 00:17:14.538 } 00:17:14.872 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:14.872 fio-3.35 00:17:14.872 Starting 1 thread 00:17:21.456 00:17:21.456 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72359: Tue Nov 26 18:18:54 2024 00:17:21.456 write: IOPS=47.2k, BW=184MiB/s (193MB/s)(922MiB/5002msec); 0 zone resets 00:17:21.456 slat (usec): min=2, max=171, avg= 4.02, stdev= 2.38 00:17:21.456 clat (usec): min=816, max=2959, avg=1197.25, stdev=149.49 00:17:21.456 lat (usec): min=820, max=2994, avg=1201.27, stdev=149.99 00:17:21.456 clat percentiles (usec): 00:17:21.456 | 1.00th=[ 938], 5.00th=[ 996], 10.00th=[ 1037], 20.00th=[ 1074], 00:17:21.456 | 30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1172], 60.00th=[ 1221], 00:17:21.456 | 70.00th=[ 1254], 80.00th=[ 1303], 90.00th=[ 1369], 95.00th=[ 1450], 00:17:21.456 | 99.00th=[ 1696], 99.50th=[ 1795], 99.90th=[ 1975], 99.95th=[ 2040], 00:17:21.456 | 99.99th=[ 2737] 00:17:21.456 bw ( KiB/s): min=177664, max=209920, per=100.00%, avg=189494.22, stdev=10593.25, samples=9 00:17:21.456 iops : min=44416, max=52480, avg=47373.56, stdev=2648.31, samples=9 00:17:21.456 lat (usec) : 1000=5.35% 00:17:21.456 lat (msec) : 2=94.58%, 4=0.07% 00:17:21.456 cpu : usr=58.95%, sys=36.83%, ctx=13, majf=0, minf=763 00:17:21.456 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:21.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.456 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:21.456 issued rwts: total=0,236093,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.456 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:21.456 00:17:21.456 Run status group 0 (all jobs): 00:17:21.456 WRITE: bw=184MiB/s (193MB/s), 184MiB/s-184MiB/s (193MB/s-193MB/s), io=922MiB (967MB), run=5002-5002msec 00:17:22.022 ----------------------------------------------------- 00:17:22.022 Suppressions used: 00:17:22.022 count bytes template 00:17:22.022 1 11 /usr/src/fio/parse.c 00:17:22.022 1 8 libtcmalloc_minimal.so 00:17:22.022 1 904 libcrypto.so 00:17:22.022 ----------------------------------------------------- 00:17:22.022 00:17:22.022 ************************************ 00:17:22.022 END TEST xnvme_fio_plugin 00:17:22.022 ************************************ 00:17:22.022 00:17:22.022 real 0m14.621s 00:17:22.022 user 0m8.739s 00:17:22.022 sys 0m5.159s 
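The END TEST banner above closes the io_uring/conserve_cpu=true fio round; the io_uring_cmd rounds that follow repeat the same plugin pattern traced here: ldd the spdk_bdev ioengine, pick out the ASan runtime it links against, and preload both ahead of fio so the sanitizer loads before the plugin. A condensed sketch of that sequence, using the paths and fio flags from the trace (emit_conf is a hypothetical stand-in for the harness's gen_conf JSON generator):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')  # resolves to /usr/lib64/libasan.so.8 in this run
# emit_conf: hypothetical stand-in for the JSON generator shown dumping configs above
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf=<(emit_conf) --filename=xnvme_bdev \
  --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
  --time_based --runtime=5 --thread=1 --name xnvme_bdev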
00:17:22.022 18:18:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.022 18:18:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:22.022 18:18:56 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:17:22.022 18:18:56 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:17:22.022 18:18:56 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:17:22.022 18:18:56 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:17:22.022 18:18:56 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:17:22.022 18:18:56 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:22.022 18:18:56 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:17:22.022 18:18:56 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:17:22.022 18:18:56 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:22.022 18:18:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:22.022 18:18:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.022 18:18:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:22.022 ************************************ 00:17:22.022 START TEST xnvme_rpc 00:17:22.022 ************************************ 00:17:22.022 18:18:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:22.022 18:18:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:22.022 18:18:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:22.022 18:18:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:22.022 18:18:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:22.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.023 18:18:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72440 00:17:22.023 18:18:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72440 00:17:22.023 18:18:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72440 ']' 00:17:22.023 18:18:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:22.023 18:18:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.023 18:18:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.023 18:18:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.023 18:18:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.023 18:18:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:22.023 [2024-11-26 18:18:56.460223] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
00:17:22.023 [2024-11-26 18:18:56.460608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72440 ] 00:17:22.280 [2024-11-26 18:18:56.636089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.538 [2024-11-26 18:18:56.758560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.472 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.472 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:23.472 18:18:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:17:23.472 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.472 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.472 xnvme_bdev 00:17:23.472 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.472 18:18:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:23.472 18:18:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:23.472 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.472 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.472 18:18:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72440 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72440 ']' 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72440 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72440 00:17:23.473 killing process with pid 72440 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72440' 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72440 00:17:23.473 18:18:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72440 00:17:25.999 00:17:25.999 real 0m3.522s 00:17:25.999 user 0m3.711s 00:17:25.999 sys 0m0.552s 00:17:25.999 18:18:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.999 ************************************ 00:17:25.999 END TEST xnvme_rpc 00:17:25.999 ************************************ 00:17:25.999 18:18:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:25.999 18:18:59 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:25.999 18:18:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:25.999 18:18:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:25.999 18:18:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:25.999 ************************************ 00:17:25.999 START TEST xnvme_bdevperf 00:17:25.999 ************************************ 00:17:25.999 18:18:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:25.999 18:18:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:25.999 18:18:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:17:25.999 18:18:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:25.999 18:18:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:25.999 18:18:59 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:25.999 18:18:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:25.999 18:18:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:25.999 { 00:17:25.999 "subsystems": [ 00:17:25.999 { 00:17:25.999 "subsystem": "bdev", 00:17:25.999 "config": [ 00:17:25.999 { 00:17:25.999 "params": { 00:17:25.999 "io_mechanism": "io_uring_cmd", 00:17:25.999 "conserve_cpu": false, 00:17:25.999 "filename": "/dev/ng0n1", 00:17:25.999 "name": "xnvme_bdev" 00:17:25.999 }, 00:17:25.999 "method": "bdev_xnvme_create" 00:17:25.999 }, 00:17:25.999 { 00:17:25.999 "method": "bdev_wait_for_examine" 00:17:25.999 } 00:17:25.999 ] 00:17:25.999 } 00:17:25.999 ] 00:17:25.999 } 00:17:25.999 [2024-11-26 18:19:00.024914] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:17:25.999 [2024-11-26 18:19:00.025250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72524 ] 00:17:25.999 [2024-11-26 18:19:00.194388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.999 [2024-11-26 18:19:00.317015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.257 Running I/O for 5 seconds... 00:17:28.198 53056.00 IOPS, 207.25 MiB/s [2024-11-26T18:19:04.035Z] 54240.00 IOPS, 211.88 MiB/s [2024-11-26T18:19:04.969Z] 54442.67 IOPS, 212.67 MiB/s [2024-11-26T18:19:05.904Z] 53696.00 IOPS, 209.75 MiB/s 00:17:31.443 Latency(us) 00:17:31.443 [2024-11-26T18:19:05.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:31.443 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:31.443 xnvme_bdev : 5.00 53462.63 208.84 0.00 0.00 1193.59 800.58 3038.49 00:17:31.443 [2024-11-26T18:19:05.904Z] =================================================================================================================== 00:17:31.443 [2024-11-26T18:19:05.904Z] Total : 53462.63 208.84 0.00 0.00 1193.59 800.58 3038.49 00:17:32.377 18:19:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:32.377 18:19:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:32.377 18:19:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:32.377 18:19:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:32.377 18:19:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:32.377 { 00:17:32.377 "subsystems": [ 00:17:32.377 { 00:17:32.377 "subsystem": "bdev", 00:17:32.377 "config": [ 00:17:32.377 { 00:17:32.377 "params": { 00:17:32.377 "io_mechanism": "io_uring_cmd", 00:17:32.377 "conserve_cpu": false, 00:17:32.377 "filename": "/dev/ng0n1", 00:17:32.377 "name": "xnvme_bdev" 00:17:32.377 }, 00:17:32.377 "method": "bdev_xnvme_create" 00:17:32.377 }, 00:17:32.377 { 00:17:32.377 "method": "bdev_wait_for_examine" 00:17:32.377 } 00:17:32.377 ] 00:17:32.377 } 00:17:32.377 ] 00:17:32.377 } 00:17:32.377 [2024-11-26 18:19:06.731197] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
00:17:32.377 [2024-11-26 18:19:06.731423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72595 ] 00:17:32.635 [2024-11-26 18:19:06.919112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.635 [2024-11-26 18:19:07.048954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.202 Running I/O for 5 seconds... 00:17:35.070 48253.00 IOPS, 188.49 MiB/s [2024-11-26T18:19:10.464Z] 47359.50 IOPS, 185.00 MiB/s [2024-11-26T18:19:11.398Z] 46677.00 IOPS, 182.33 MiB/s [2024-11-26T18:19:12.773Z] 46063.75 IOPS, 179.94 MiB/s [2024-11-26T18:19:12.773Z] 46310.20 IOPS, 180.90 MiB/s 00:17:38.312 Latency(us) 00:17:38.312 [2024-11-26T18:19:12.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.312 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:38.312 xnvme_bdev : 5.00 46295.80 180.84 0.00 0.00 1377.77 573.44 3961.95 00:17:38.312 [2024-11-26T18:19:12.773Z] =================================================================================================================== 00:17:38.312 [2024-11-26T18:19:12.773Z] Total : 46295.80 180.84 0.00 0.00 1377.77 573.44 3961.95 00:17:39.271 18:19:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:39.271 18:19:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:17:39.271 18:19:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:39.271 18:19:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:39.271 18:19:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:39.271 { 00:17:39.271 "subsystems": [ 00:17:39.271 { 00:17:39.271 "subsystem": "bdev", 00:17:39.271 "config": [ 00:17:39.271 { 00:17:39.271 "params": { 00:17:39.271 "io_mechanism": "io_uring_cmd", 00:17:39.271 "conserve_cpu": false, 00:17:39.271 "filename": "/dev/ng0n1", 00:17:39.271 "name": "xnvme_bdev" 00:17:39.271 }, 00:17:39.271 "method": "bdev_xnvme_create" 00:17:39.271 }, 00:17:39.271 { 00:17:39.271 "method": "bdev_wait_for_examine" 00:17:39.271 } 00:17:39.271 ] 00:17:39.271 } 00:17:39.271 ] 00:17:39.271 } 00:17:39.271 [2024-11-26 18:19:13.470652] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:17:39.271 [2024-11-26 18:19:13.470849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72681 ] 00:17:39.271 [2024-11-26 18:19:13.658797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.530 [2024-11-26 18:19:13.776610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.787 Running I/O for 5 seconds... 
00:17:41.655 81344.00 IOPS, 317.75 MiB/s [2024-11-26T18:19:17.490Z] 83104.00 IOPS, 324.62 MiB/s [2024-11-26T18:19:18.424Z] 79274.67 IOPS, 309.67 MiB/s [2024-11-26T18:19:19.361Z] 78256.00 IOPS, 305.69 MiB/s 00:17:44.900 Latency(us) 00:17:44.900 [2024-11-26T18:19:19.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.900 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:17:44.900 xnvme_bdev : 5.00 78543.48 306.81 0.00 0.00 811.30 446.84 2800.17 00:17:44.900 [2024-11-26T18:19:19.361Z] =================================================================================================================== 00:17:44.900 [2024-11-26T18:19:19.362Z] Total : 78543.48 306.81 0.00 0.00 811.30 446.84 2800.17 00:17:45.839 18:19:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:45.839 18:19:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:17:45.839 18:19:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:45.839 18:19:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:45.839 18:19:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:45.839 { 00:17:45.839 "subsystems": [ 00:17:45.839 { 00:17:45.839 "subsystem": "bdev", 00:17:45.839 "config": [ 00:17:45.839 { 00:17:45.839 "params": { 00:17:45.839 "io_mechanism": "io_uring_cmd", 00:17:45.839 "conserve_cpu": false, 00:17:45.839 "filename": "/dev/ng0n1", 00:17:45.839 "name": "xnvme_bdev" 00:17:45.839 }, 00:17:45.839 "method": "bdev_xnvme_create" 00:17:45.839 }, 00:17:45.839 { 00:17:45.839 "method": "bdev_wait_for_examine" 00:17:45.839 } 00:17:45.839 ] 00:17:45.839 } 00:17:45.839 ] 00:17:45.839 } 00:17:45.839 [2024-11-26 18:19:20.206771] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:17:45.839 [2024-11-26 18:19:20.206960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72758 ] 00:17:46.168 [2024-11-26 18:19:20.391346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.168 [2024-11-26 18:19:20.514097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.425 Running I/O for 5 seconds... 
00:17:48.729 45274.00 IOPS, 176.85 MiB/s [2024-11-26T18:19:24.122Z] 42904.00 IOPS, 167.59 MiB/s [2024-11-26T18:19:25.056Z] 42603.67 IOPS, 166.42 MiB/s [2024-11-26T18:19:25.989Z] 42328.25 IOPS, 165.34 MiB/s [2024-11-26T18:19:25.989Z] 42214.20 IOPS, 164.90 MiB/s 00:17:51.528 Latency(us) 00:17:51.528 [2024-11-26T18:19:25.989Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.528 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:17:51.528 xnvme_bdev : 5.00 42192.76 164.82 0.00 0.00 1512.58 81.92 16920.20 00:17:51.528 [2024-11-26T18:19:25.989Z] =================================================================================================================== 00:17:51.528 [2024-11-26T18:19:25.989Z] Total : 42192.76 164.82 0.00 0.00 1512.58 81.92 16920.20 00:17:52.903 00:17:52.903 real 0m27.083s 00:17:52.903 user 0m14.628s 00:17:52.903 sys 0m12.056s 00:17:52.903 18:19:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.903 ************************************ 00:17:52.903 END TEST xnvme_bdevperf 00:17:52.903 ************************************ 00:17:52.903 18:19:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:52.903 18:19:27 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:52.904 18:19:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:52.904 18:19:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.904 18:19:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:52.904 ************************************ 00:17:52.904 START TEST xnvme_fio_plugin 00:17:52.904 ************************************ 00:17:52.904 18:19:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:52.904 18:19:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:52.904 18:19:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:17:52.904 18:19:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:52.904 18:19:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:52.904 18:19:27 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:52.904 18:19:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:52.904 18:19:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:52.904 18:19:27 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:52.904 18:19:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:52.904 18:19:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:52.904 18:19:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:52.904 18:19:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
00:17:52.904 18:19:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:52.904 18:19:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:52.904 18:19:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:52.904 18:19:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:52.904 18:19:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:52.904 18:19:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:52.904 18:19:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:52.904 18:19:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:52.904 18:19:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:52.904 18:19:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:52.904 18:19:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:52.904 { 00:17:52.904 "subsystems": [ 00:17:52.904 { 00:17:52.904 "subsystem": "bdev", 00:17:52.904 "config": [ 00:17:52.904 { 00:17:52.904 "params": { 00:17:52.904 "io_mechanism": "io_uring_cmd", 00:17:52.904 "conserve_cpu": false, 00:17:52.904 "filename": "/dev/ng0n1", 00:17:52.904 "name": "xnvme_bdev" 00:17:52.904 }, 00:17:52.904 "method": "bdev_xnvme_create" 00:17:52.904 }, 00:17:52.904 { 00:17:52.904 "method": "bdev_wait_for_examine" 00:17:52.904 } 00:17:52.904 ] 00:17:52.904 } 00:17:52.904 ] 00:17:52.904 } 00:17:52.904 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:52.904 fio-3.35 00:17:52.904 Starting 1 thread 00:17:59.455 00:17:59.455 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72878: Tue Nov 26 18:19:33 2024 00:17:59.455 read: IOPS=47.8k, BW=187MiB/s (196MB/s)(935MiB/5001msec) 00:17:59.455 slat (usec): min=2, max=314, avg= 4.29, stdev= 2.72 00:17:59.455 clat (usec): min=538, max=2594, avg=1166.36, stdev=172.05 00:17:59.455 lat (usec): min=542, max=2634, avg=1170.65, stdev=172.75 00:17:59.455 clat percentiles (usec): 00:17:59.455 | 1.00th=[ 873], 5.00th=[ 938], 10.00th=[ 979], 20.00th=[ 1029], 00:17:59.455 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1139], 60.00th=[ 1188], 00:17:59.455 | 70.00th=[ 1221], 80.00th=[ 1287], 90.00th=[ 1369], 95.00th=[ 1483], 00:17:59.455 | 99.00th=[ 1745], 99.50th=[ 1827], 99.90th=[ 2024], 99.95th=[ 2114], 00:17:59.455 | 99.99th=[ 2376] 00:17:59.455 bw ( KiB/s): min=174592, max=214528, per=100.00%, avg=191687.11, stdev=14997.20, samples=9 00:17:59.455 iops : min=43648, max=53632, avg=47921.78, stdev=3749.30, samples=9 00:17:59.455 lat (usec) : 750=0.02%, 1000=13.98% 00:17:59.455 lat (msec) : 2=85.89%, 4=0.12% 00:17:59.455 cpu : usr=41.98%, sys=56.52%, ctx=71, majf=0, minf=762 00:17:59.455 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:59.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.455 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, 
>=64=0.0% 00:17:59.455 issued rwts: total=239232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:59.455 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:59.455 00:17:59.455 Run status group 0 (all jobs): 00:17:59.455 READ: bw=187MiB/s (196MB/s), 187MiB/s-187MiB/s (196MB/s-196MB/s), io=935MiB (980MB), run=5001-5001msec 00:18:00.050 ----------------------------------------------------- 00:18:00.050 Suppressions used: 00:18:00.050 count bytes template 00:18:00.050 1 11 /usr/src/fio/parse.c 00:18:00.050 1 8 libtcmalloc_minimal.so 00:18:00.050 1 904 libcrypto.so 00:18:00.050 ----------------------------------------------------- 00:18:00.050 00:18:00.050 18:19:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:00.050 18:19:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:00.050 18:19:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:00.050 18:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:00.050 18:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:00.050 18:19:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:00.050 18:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:00.050 18:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:00.050 18:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:00.050 18:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:00.050 18:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:00.050 18:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:00.050 18:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:00.050 18:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:00.050 18:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:00.050 18:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:00.309 18:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:00.309 18:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:00.309 18:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:00.309 18:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:00.309 18:19:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:00.309 { 00:18:00.309 "subsystems": [ 00:18:00.309 { 00:18:00.309 "subsystem": "bdev", 00:18:00.309 "config": [ 00:18:00.309 { 00:18:00.309 "params": { 00:18:00.309 "io_mechanism": "io_uring_cmd", 00:18:00.309 "conserve_cpu": false, 00:18:00.309 "filename": "/dev/ng0n1", 00:18:00.309 "name": "xnvme_bdev" 00:18:00.309 }, 00:18:00.309 "method": "bdev_xnvme_create" 00:18:00.309 }, 00:18:00.309 { 00:18:00.309 "method": "bdev_wait_for_examine" 00:18:00.309 } 00:18:00.309 ] 00:18:00.309 } 00:18:00.309 ] 00:18:00.309 } 00:18:00.567 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:00.567 fio-3.35 00:18:00.567 Starting 1 thread 00:18:07.145 00:18:07.145 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72974: Tue Nov 26 18:19:40 2024 00:18:07.145 write: IOPS=38.1k, BW=149MiB/s (156MB/s)(744MiB/5001msec); 0 zone resets 00:18:07.145 slat (usec): min=2, max=127, avg= 5.21, stdev= 2.96 00:18:07.145 clat (usec): min=73, max=25264, avg=1507.24, stdev=1284.09 00:18:07.145 lat (usec): min=79, max=25268, avg=1512.45, stdev=1284.27 00:18:07.145 clat percentiles (usec): 00:18:07.145 | 1.00th=[ 233], 5.00th=[ 586], 10.00th=[ 930], 20.00th=[ 1045], 00:18:07.145 | 30.00th=[ 1106], 40.00th=[ 1172], 50.00th=[ 1221], 60.00th=[ 1287], 00:18:07.145 | 70.00th=[ 1352], 80.00th=[ 1467], 90.00th=[ 2114], 95.00th=[ 3720], 00:18:07.145 | 99.00th=[ 7308], 99.50th=[ 9634], 99.90th=[13304], 99.95th=[16712], 00:18:07.145 | 99.99th=[23200] 00:18:07.145 bw ( KiB/s): min=95800, max=201728, per=100.00%, avg=154016.00, stdev=35411.83, samples=9 00:18:07.145 iops : min=23950, max=50432, avg=38504.00, stdev=8852.96, samples=9 00:18:07.145 lat (usec) : 100=0.02%, 250=1.15%, 500=2.74%, 750=2.81%, 1000=8.14% 00:18:07.145 lat (msec) : 2=74.64%, 4=6.32%, 10=3.73%, 20=0.43%, 50=0.03% 00:18:07.145 cpu : usr=39.26%, sys=59.48%, ctx=6, majf=0, minf=763 00:18:07.145 IO depths : 1=1.2%, 2=2.4%, 4=4.9%, 8=9.9%, 16=20.5%, 32=57.3%, >=64=3.8% 00:18:07.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.145 complete : 0=0.0%, 4=97.6%, 8=0.4%, 16=0.4%, 32=0.3%, 64=1.3%, >=64=0.0% 00:18:07.145 issued rwts: total=0,190370,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.145 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:07.145 00:18:07.145 Run status group 0 (all jobs): 00:18:07.145 WRITE: bw=149MiB/s (156MB/s), 149MiB/s-149MiB/s (156MB/s-156MB/s), io=744MiB (780MB), run=5001-5001msec 00:18:07.711 ----------------------------------------------------- 00:18:07.711 Suppressions used: 00:18:07.711 count bytes template 00:18:07.711 1 11 /usr/src/fio/parse.c 00:18:07.711 1 8 libtcmalloc_minimal.so 00:18:07.711 1 904 libcrypto.so 00:18:07.711 ----------------------------------------------------- 00:18:07.711 00:18:07.711 00:18:07.711 real 0m14.853s 00:18:07.711 user 0m7.886s 00:18:07.711 sys 0m6.558s 00:18:07.711 18:19:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:07.711 ************************************ 00:18:07.711 END TEST xnvme_fio_plugin 00:18:07.711 ************************************ 00:18:07.711 18:19:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:07.711 18:19:41 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:07.711 18:19:41 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:18:07.711 18:19:41 nvme_xnvme -- 
xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:18:07.711 18:19:41 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:07.711 18:19:41 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:07.711 18:19:41 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:07.711 18:19:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:07.711 ************************************ 00:18:07.711 START TEST xnvme_rpc 00:18:07.711 ************************************ 00:18:07.711 18:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:07.711 18:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:07.711 18:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:07.711 18:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:07.711 18:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:07.711 18:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73060 00:18:07.711 18:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:07.711 18:19:41 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73060 00:18:07.711 18:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73060 ']' 00:18:07.711 18:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.711 18:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.711 18:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.711 18:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.711 18:19:41 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:07.711 [2024-11-26 18:19:42.100104] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
00:18:07.711 [2024-11-26 18:19:42.100281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73060 ] 00:18:07.969 [2024-11-26 18:19:42.285507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.245 [2024-11-26 18:19:42.470650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.209 xnvme_bdev 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:09.209 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.210 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.210 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.210 18:19:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:18:09.210 18:19:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:09.210 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.210 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.210 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.210 18:19:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73060 00:18:09.210 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73060 ']' 00:18:09.210 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73060 00:18:09.210 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:09.210 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:09.210 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73060 00:18:09.468 killing process with pid 73060 00:18:09.468 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:09.468 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:09.468 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73060' 00:18:09.468 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73060 00:18:09.468 18:19:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73060 00:18:11.997 ************************************ 00:18:11.997 END TEST xnvme_rpc 00:18:11.997 ************************************ 00:18:11.997 00:18:11.997 real 0m3.955s 00:18:11.997 user 0m4.159s 00:18:11.997 sys 0m0.627s 00:18:11.997 18:19:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:11.997 18:19:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:11.997 18:19:45 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:11.997 18:19:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:11.997 18:19:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:11.997 18:19:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:11.997 ************************************ 00:18:11.997 START TEST xnvme_bdevperf 00:18:11.997 ************************************ 00:18:11.997 18:19:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:11.997 18:19:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:11.997 18:19:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:18:11.997 18:19:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:11.997 18:19:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:11.997 18:19:45 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:11.997 18:19:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:11.997 18:19:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:11.997 { 00:18:11.997 "subsystems": [ 00:18:11.997 { 00:18:11.997 "subsystem": "bdev", 00:18:11.997 "config": [ 00:18:11.997 { 00:18:11.997 "params": { 00:18:11.997 "io_mechanism": "io_uring_cmd", 00:18:11.997 "conserve_cpu": true, 00:18:11.997 "filename": "/dev/ng0n1", 00:18:11.997 "name": "xnvme_bdev" 00:18:11.997 }, 00:18:11.997 "method": "bdev_xnvme_create" 00:18:11.997 }, 00:18:11.997 { 00:18:11.997 "method": "bdev_wait_for_examine" 00:18:11.997 } 00:18:11.997 ] 00:18:11.997 } 00:18:11.997 ] 00:18:11.997 } 00:18:11.997 [2024-11-26 18:19:46.117331] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:18:11.997 [2024-11-26 18:19:46.117517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73144 ] 00:18:11.997 [2024-11-26 18:19:46.316374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.256 [2024-11-26 18:19:46.488343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.514 Running I/O for 5 seconds... 00:18:14.823 48320.00 IOPS, 188.75 MiB/s [2024-11-26T18:19:50.218Z] 50078.50 IOPS, 195.62 MiB/s [2024-11-26T18:19:51.150Z] 50537.67 IOPS, 197.41 MiB/s [2024-11-26T18:19:52.083Z] 50895.25 IOPS, 198.81 MiB/s [2024-11-26T18:19:52.083Z] 51468.20 IOPS, 201.05 MiB/s 00:18:17.622 Latency(us) 00:18:17.622 [2024-11-26T18:19:52.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.622 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:17.622 xnvme_bdev : 5.01 51427.78 200.89 0.00 0.00 1240.54 808.03 4438.57 00:18:17.622 [2024-11-26T18:19:52.083Z] =================================================================================================================== 00:18:17.622 [2024-11-26T18:19:52.083Z] Total : 51427.78 200.89 0.00 0.00 1240.54 808.03 4438.57 00:18:18.996 18:19:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:18.996 18:19:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:18.996 18:19:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:18.996 18:19:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:18.996 18:19:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:18.996 { 00:18:18.996 "subsystems": [ 00:18:18.996 { 00:18:18.996 "subsystem": "bdev", 00:18:18.996 "config": [ 00:18:18.996 { 00:18:18.996 "params": { 00:18:18.996 "io_mechanism": "io_uring_cmd", 00:18:18.996 "conserve_cpu": true, 00:18:18.996 "filename": "/dev/ng0n1", 00:18:18.996 "name": "xnvme_bdev" 00:18:18.996 }, 00:18:18.996 "method": "bdev_xnvme_create" 00:18:18.996 }, 00:18:18.996 { 00:18:18.996 "method": "bdev_wait_for_examine" 00:18:18.996 } 00:18:18.996 ] 00:18:18.996 } 00:18:18.996 ] 00:18:18.996 } 00:18:18.996 [2024-11-26 18:19:53.145793] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
00:18:18.996 [2024-11-26 18:19:53.145968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73224 ] 00:18:18.996 [2024-11-26 18:19:53.324801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.254 [2024-11-26 18:19:53.458369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.512 Running I/O for 5 seconds... 00:18:21.418 43680.00 IOPS, 170.62 MiB/s [2024-11-26T18:19:57.253Z] 43728.00 IOPS, 170.81 MiB/s [2024-11-26T18:19:58.187Z] 43616.00 IOPS, 170.38 MiB/s [2024-11-26T18:19:59.121Z] 43832.00 IOPS, 171.22 MiB/s [2024-11-26T18:19:59.121Z] 43833.60 IOPS, 171.22 MiB/s 00:18:24.660 Latency(us) 00:18:24.660 [2024-11-26T18:19:59.121Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.660 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:24.660 xnvme_bdev : 5.01 43797.71 171.08 0.00 0.00 1456.13 655.36 6374.87 00:18:24.660 [2024-11-26T18:19:59.121Z] =================================================================================================================== 00:18:24.660 [2024-11-26T18:19:59.121Z] Total : 43797.71 171.08 0.00 0.00 1456.13 655.36 6374.87 00:18:25.591 18:19:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:25.591 18:19:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:18:25.591 18:19:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:25.591 18:19:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:25.591 18:19:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:25.591 { 00:18:25.591 "subsystems": [ 00:18:25.591 { 00:18:25.591 "subsystem": "bdev", 00:18:25.591 "config": [ 00:18:25.591 { 00:18:25.591 "params": { 00:18:25.591 "io_mechanism": "io_uring_cmd", 00:18:25.591 "conserve_cpu": true, 00:18:25.591 "filename": "/dev/ng0n1", 00:18:25.591 "name": "xnvme_bdev" 00:18:25.591 }, 00:18:25.591 "method": "bdev_xnvme_create" 00:18:25.591 }, 00:18:25.591 { 00:18:25.591 "method": "bdev_wait_for_examine" 00:18:25.591 } 00:18:25.591 ] 00:18:25.591 } 00:18:25.591 ] 00:18:25.591 } 00:18:25.849 [2024-11-26 18:20:00.071400] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:18:25.849 [2024-11-26 18:20:00.071638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73294 ] 00:18:25.849 [2024-11-26 18:20:00.265005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.108 [2024-11-26 18:20:00.440507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.366 Running I/O for 5 seconds... 
00:18:28.670 71424.00 IOPS, 279.00 MiB/s [2024-11-26T18:20:04.061Z] 69888.00 IOPS, 273.00 MiB/s [2024-11-26T18:20:04.995Z] 70720.00 IOPS, 276.25 MiB/s [2024-11-26T18:20:05.928Z] 71520.00 IOPS, 279.38 MiB/s 00:18:31.467 Latency(us) 00:18:31.467 [2024-11-26T18:20:05.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.467 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:18:31.467 xnvme_bdev : 5.00 71968.66 281.13 0.00 0.00 885.39 480.35 5928.03 00:18:31.467 [2024-11-26T18:20:05.928Z] =================================================================================================================== 00:18:31.467 [2024-11-26T18:20:05.928Z] Total : 71968.66 281.13 0.00 0.00 885.39 480.35 5928.03 00:18:32.841 18:20:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:32.841 18:20:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:32.841 18:20:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:18:32.841 18:20:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:32.841 18:20:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:32.841 { 00:18:32.841 "subsystems": [ 00:18:32.841 { 00:18:32.841 "subsystem": "bdev", 00:18:32.841 "config": [ 00:18:32.841 { 00:18:32.841 "params": { 00:18:32.841 "io_mechanism": "io_uring_cmd", 00:18:32.841 "conserve_cpu": true, 00:18:32.841 "filename": "/dev/ng0n1", 00:18:32.841 "name": "xnvme_bdev" 00:18:32.841 }, 00:18:32.841 "method": "bdev_xnvme_create" 00:18:32.841 }, 00:18:32.841 { 00:18:32.841 "method": "bdev_wait_for_examine" 00:18:32.841 } 00:18:32.841 ] 00:18:32.841 } 00:18:32.841 ] 00:18:32.841 } 00:18:32.841 [2024-11-26 18:20:06.977452] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:18:32.841 [2024-11-26 18:20:06.977679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73374 ] 00:18:32.841 [2024-11-26 18:20:07.165255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.100 [2024-11-26 18:20:07.300704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.358 Running I/O for 5 seconds... 
00:18:35.231 38617.00 IOPS, 150.85 MiB/s [2024-11-26T18:20:11.065Z] 39011.50 IOPS, 152.39 MiB/s [2024-11-26T18:20:11.998Z] 38641.67 IOPS, 150.94 MiB/s [2024-11-26T18:20:12.931Z] 34942.75 IOPS, 136.50 MiB/s 00:18:38.470 Latency(us) 00:18:38.470 [2024-11-26T18:20:12.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.470 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:18:38.470 xnvme_bdev : 5.00 33864.64 132.28 0.00 0.00 1881.72 64.70 55050.24 00:18:38.470 [2024-11-26T18:20:12.931Z] =================================================================================================================== 00:18:38.470 [2024-11-26T18:20:12.931Z] Total : 33864.64 132.28 0.00 0.00 1881.72 64.70 55050.24 00:18:39.405 00:18:39.405 real 0m27.695s 00:18:39.405 user 0m19.005s 00:18:39.405 sys 0m6.574s 00:18:39.405 18:20:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:39.405 18:20:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:39.405 ************************************ 00:18:39.405 END TEST xnvme_bdevperf 00:18:39.405 ************************************ 00:18:39.405 18:20:13 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:39.405 18:20:13 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:39.405 18:20:13 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:39.405 18:20:13 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:39.405 ************************************ 00:18:39.405 START TEST xnvme_fio_plugin 00:18:39.405 ************************************ 00:18:39.405 18:20:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:39.405 18:20:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:39.405 18:20:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:18:39.405 18:20:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:39.405 18:20:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:39.405 18:20:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:39.405 18:20:13 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:39.405 18:20:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:39.405 18:20:13 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:39.405 18:20:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:39.405 18:20:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:39.405 18:20:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:39.405 18:20:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:39.405 18:20:13 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1345 -- # shift 00:18:39.405 18:20:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:39.405 18:20:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:39.405 18:20:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:39.405 18:20:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:39.405 18:20:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:39.405 18:20:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:39.405 18:20:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:39.405 18:20:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:39.405 18:20:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:39.405 18:20:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:39.405 { 00:18:39.405 "subsystems": [ 00:18:39.405 { 00:18:39.405 "subsystem": "bdev", 00:18:39.405 "config": [ 00:18:39.405 { 00:18:39.405 "params": { 00:18:39.405 "io_mechanism": "io_uring_cmd", 00:18:39.405 "conserve_cpu": true, 00:18:39.405 "filename": "/dev/ng0n1", 00:18:39.405 "name": "xnvme_bdev" 00:18:39.405 }, 00:18:39.405 "method": "bdev_xnvme_create" 00:18:39.405 }, 00:18:39.405 { 00:18:39.405 "method": "bdev_wait_for_examine" 00:18:39.405 } 00:18:39.405 ] 00:18:39.405 } 00:18:39.405 ] 00:18:39.405 } 00:18:39.664 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:39.664 fio-3.35 00:18:39.664 Starting 1 thread 00:18:46.226 00:18:46.226 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73498: Tue Nov 26 18:20:19 2024 00:18:46.226 read: IOPS=48.8k, BW=191MiB/s (200MB/s)(953MiB/5002msec) 00:18:46.226 slat (nsec): min=2742, max=48529, avg=4111.42, stdev=1679.01 00:18:46.226 clat (usec): min=755, max=4226, avg=1149.16, stdev=172.44 00:18:46.226 lat (usec): min=759, max=4232, avg=1153.27, stdev=172.82 00:18:46.226 clat percentiles (usec): 00:18:46.226 | 1.00th=[ 873], 5.00th=[ 938], 10.00th=[ 979], 20.00th=[ 1029], 00:18:46.226 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1123], 60.00th=[ 1172], 00:18:46.226 | 70.00th=[ 1205], 80.00th=[ 1254], 90.00th=[ 1319], 95.00th=[ 1385], 00:18:46.226 | 99.00th=[ 1713], 99.50th=[ 1876], 99.90th=[ 2802], 99.95th=[ 3032], 00:18:46.226 | 99.99th=[ 4113] 00:18:46.226 bw ( KiB/s): min=184320, max=210432, per=100.00%, avg=195128.00, stdev=9331.26, samples=9 00:18:46.226 iops : min=46080, max=52608, avg=48782.00, stdev=2332.81, samples=9 00:18:46.226 lat (usec) : 1000=13.60% 00:18:46.226 lat (msec) : 2=86.03%, 4=0.36%, 10=0.02% 00:18:46.226 cpu : usr=67.39%, sys=29.63%, ctx=10, majf=0, minf=762 00:18:46.226 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:46.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:46.226 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:46.226 issued rwts: 
total=243967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:46.226 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:46.226 00:18:46.226 Run status group 0 (all jobs): 00:18:46.226 READ: bw=191MiB/s (200MB/s), 191MiB/s-191MiB/s (200MB/s-200MB/s), io=953MiB (999MB), run=5002-5002msec 00:18:47.158 ----------------------------------------------------- 00:18:47.158 Suppressions used: 00:18:47.158 count bytes template 00:18:47.158 1 11 /usr/src/fio/parse.c 00:18:47.158 1 8 libtcmalloc_minimal.so 00:18:47.158 1 904 libcrypto.so 00:18:47.158 ----------------------------------------------------- 00:18:47.158 00:18:47.158 18:20:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:47.158 18:20:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:47.158 18:20:21 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:47.158 18:20:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:47.158 18:20:21 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:47.158 18:20:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:47.158 18:20:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:47.158 18:20:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:47.158 18:20:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:47.158 18:20:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:47.158 18:20:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:47.158 18:20:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:47.158 18:20:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:47.158 18:20:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:47.158 18:20:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:47.158 18:20:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:47.158 18:20:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:47.158 18:20:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:47.158 18:20:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:47.158 18:20:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:47.158 18:20:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based 
--runtime=5 --thread=1 --name xnvme_bdev 00:18:47.158 { 00:18:47.158 "subsystems": [ 00:18:47.158 { 00:18:47.158 "subsystem": "bdev", 00:18:47.158 "config": [ 00:18:47.158 { 00:18:47.158 "params": { 00:18:47.158 "io_mechanism": "io_uring_cmd", 00:18:47.158 "conserve_cpu": true, 00:18:47.158 "filename": "/dev/ng0n1", 00:18:47.158 "name": "xnvme_bdev" 00:18:47.159 }, 00:18:47.159 "method": "bdev_xnvme_create" 00:18:47.159 }, 00:18:47.159 { 00:18:47.159 "method": "bdev_wait_for_examine" 00:18:47.159 } 00:18:47.159 ] 00:18:47.159 } 00:18:47.159 ] 00:18:47.159 } 00:18:47.416 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:47.416 fio-3.35 00:18:47.416 Starting 1 thread 00:18:53.977 00:18:53.977 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73589: Tue Nov 26 18:20:27 2024 00:18:53.977 write: IOPS=43.4k, BW=170MiB/s (178MB/s)(849MiB/5001msec); 0 zone resets 00:18:53.977 slat (nsec): min=2855, max=85302, avg=5071.87, stdev=2507.77 00:18:53.977 clat (usec): min=425, max=4015, avg=1264.77, stdev=173.23 00:18:53.977 lat (usec): min=429, max=4020, avg=1269.85, stdev=173.93 00:18:53.977 clat percentiles (usec): 00:18:53.977 | 1.00th=[ 971], 5.00th=[ 1037], 10.00th=[ 1074], 20.00th=[ 1123], 00:18:53.977 | 30.00th=[ 1172], 40.00th=[ 1205], 50.00th=[ 1237], 60.00th=[ 1287], 00:18:53.977 | 70.00th=[ 1319], 80.00th=[ 1385], 90.00th=[ 1483], 95.00th=[ 1598], 00:18:53.977 | 99.00th=[ 1811], 99.50th=[ 1876], 99.90th=[ 1975], 99.95th=[ 2040], 00:18:53.977 | 99.99th=[ 2606] 00:18:53.977 bw ( KiB/s): min=166400, max=181248, per=99.91%, avg=173620.67, stdev=4910.28, samples=9 00:18:53.977 iops : min=41600, max=45312, avg=43405.33, stdev=1227.21, samples=9 00:18:53.977 lat (usec) : 500=0.01%, 750=0.01%, 1000=2.27% 00:18:53.977 lat (msec) : 2=97.64%, 4=0.08%, 10=0.01% 00:18:53.977 cpu : usr=68.78%, sys=28.20%, ctx=11, majf=0, minf=763 00:18:53.977 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:53.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.977 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:18:53.977 issued rwts: total=0,217254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.977 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:53.977 00:18:53.977 Run status group 0 (all jobs): 00:18:53.977 WRITE: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=849MiB (890MB), run=5001-5001msec 00:18:54.543 ----------------------------------------------------- 00:18:54.543 Suppressions used: 00:18:54.543 count bytes template 00:18:54.543 1 11 /usr/src/fio/parse.c 00:18:54.543 1 8 libtcmalloc_minimal.so 00:18:54.543 1 904 libcrypto.so 00:18:54.543 ----------------------------------------------------- 00:18:54.543 00:18:54.543 00:18:54.543 real 0m15.025s 00:18:54.543 user 0m10.766s 00:18:54.543 sys 0m3.667s 00:18:54.543 18:20:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.543 18:20:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:54.544 ************************************ 00:18:54.544 END TEST xnvme_fio_plugin 00:18:54.544 ************************************ 00:18:54.544 18:20:28 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73060 00:18:54.544 18:20:28 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73060 ']' 00:18:54.544 18:20:28 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73060 00:18:54.544 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73060) - No such process 00:18:54.544 Process with pid 73060 is not found 00:18:54.544 18:20:28 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73060 is not found' 00:18:54.544 ************************************ 00:18:54.544 END TEST nvme_xnvme 00:18:54.544 ************************************ 00:18:54.544 00:18:54.544 real 3m50.006s 00:18:54.544 user 2m11.918s 00:18:54.544 sys 1m21.764s 00:18:54.544 18:20:28 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.544 18:20:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:54.544 18:20:28 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:18:54.544 18:20:28 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:54.544 18:20:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:54.544 18:20:28 -- common/autotest_common.sh@10 -- # set +x 00:18:54.544 ************************************ 00:18:54.544 START TEST blockdev_xnvme 00:18:54.544 ************************************ 00:18:54.544 18:20:28 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:18:54.544 * Looking for test storage... 00:18:54.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:54.544 18:20:28 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:54.544 18:20:28 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:18:54.544 18:20:28 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:54.803 18:20:29 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:54.803 18:20:29 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:18:54.803 18:20:29 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:54.803 18:20:29 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:54.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.803 --rc genhtml_branch_coverage=1 00:18:54.803 --rc genhtml_function_coverage=1 00:18:54.803 --rc genhtml_legend=1 00:18:54.803 --rc geninfo_all_blocks=1 00:18:54.803 --rc geninfo_unexecuted_blocks=1 00:18:54.803 00:18:54.803 ' 00:18:54.803 18:20:29 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:54.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.803 --rc genhtml_branch_coverage=1 00:18:54.803 --rc genhtml_function_coverage=1 00:18:54.803 --rc genhtml_legend=1 00:18:54.803 --rc geninfo_all_blocks=1 00:18:54.803 --rc geninfo_unexecuted_blocks=1 00:18:54.803 00:18:54.803 ' 00:18:54.803 18:20:29 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:54.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.803 --rc genhtml_branch_coverage=1 00:18:54.803 --rc genhtml_function_coverage=1 00:18:54.803 --rc genhtml_legend=1 00:18:54.803 --rc geninfo_all_blocks=1 00:18:54.803 --rc geninfo_unexecuted_blocks=1 00:18:54.803 00:18:54.803 ' 00:18:54.803 18:20:29 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:54.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:54.803 --rc genhtml_branch_coverage=1 00:18:54.803 --rc genhtml_function_coverage=1 00:18:54.803 --rc genhtml_legend=1 00:18:54.803 --rc geninfo_all_blocks=1 00:18:54.803 --rc geninfo_unexecuted_blocks=1 00:18:54.803 00:18:54.803 ' 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73730 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:54.803 18:20:29 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73730 00:18:54.803 18:20:29 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73730 ']' 00:18:54.803 18:20:29 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.803 18:20:29 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:54.803 18:20:29 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.803 18:20:29 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:54.803 18:20:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:54.803 [2024-11-26 18:20:29.216421] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
00:18:54.803 [2024-11-26 18:20:29.216661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73730 ] 00:18:55.061 [2024-11-26 18:20:29.411283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.320 [2024-11-26 18:20:29.562478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.255 18:20:30 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:56.255 18:20:30 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:18:56.255 18:20:30 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:18:56.255 18:20:30 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:18:56.255 18:20:30 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:18:56.255 18:20:30 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:18:56.255 18:20:30 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:56.513 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:57.078 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:18:57.078 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:18:57.078 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:18:57.078 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:18:57.337 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:18:57.337 18:20:31 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:57.337 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:57.337 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:18:57.337 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:57.337 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:57.337 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:57.337 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:18:57.337 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:57.337 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:57.337 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:57.337 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:18:57.337 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:57.337 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:57.337 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:57.337 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:18:57.337 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:57.337 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:57.337 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:18:57.337 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:18:57.338 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:57.338 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:57.338 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:57.338 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:18:57.338 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:57.338 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:57.338 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:18:57.338 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:18:57.338 18:20:31 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.338 18:20:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:57.338 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:18:57.338 nvme0n1 00:18:57.338 nvme0n2 00:18:57.338 nvme0n3 00:18:57.338 nvme1n1 00:18:57.338 nvme2n1 00:18:57.338 nvme3n1 00:18:57.338 18:20:31 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.338 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:18:57.338 18:20:31 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.338 18:20:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:57.338 18:20:31 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.338 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:18:57.338 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:18:57.338 18:20:31 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.338 18:20:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:57.338 18:20:31 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.338 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:18:57.338 18:20:31 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.338 18:20:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:57.338 18:20:31 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.338 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:57.338 18:20:31 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.338 18:20:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:57.338 18:20:31 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.338 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:18:57.338 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:18:57.338 18:20:31 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.338 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:18:57.338 18:20:31 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:18:57.338 18:20:31 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.338 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:18:57.338 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:18:57.338 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c57e8fcf-ae55-4dcb-9ce3-662d60d8147a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c57e8fcf-ae55-4dcb-9ce3-662d60d8147a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "09e9b5f1-c6ab-4fa4-95ce-87ca1b0daada"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "09e9b5f1-c6ab-4fa4-95ce-87ca1b0daada",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "a0a9cf08-a343-462d-aafc-8ff4eb9a5ba7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a0a9cf08-a343-462d-aafc-8ff4eb9a5ba7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "e779a42a-d46b-4974-b4e0-a481cca584f2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e779a42a-d46b-4974-b4e0-a481cca584f2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "420a00f6-fc1d-4678-b2fc-eac072373a75"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "420a00f6-fc1d-4678-b2fc-eac072373a75",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "69a2f233-9f4d-47db-b897-f3940d90521d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "69a2f233-9f4d-47db-b897-f3940d90521d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:18:57.596 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:18:57.596 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:18:57.596 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:18:57.596 18:20:31 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 73730 00:18:57.596 18:20:31 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73730 ']' 00:18:57.596 18:20:31 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73730 00:18:57.596 18:20:31 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:18:57.596 18:20:31 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:57.596 18:20:31 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73730 00:18:57.596 killing process with pid 73730 00:18:57.596 18:20:31 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:57.596 18:20:31 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:57.596 18:20:31 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73730' 00:18:57.596 18:20:31 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73730 00:18:57.596 
18:20:31 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73730 00:19:00.122 18:20:34 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:00.122 18:20:34 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:19:00.122 18:20:34 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:00.122 18:20:34 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:00.122 18:20:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:00.122 ************************************ 00:19:00.122 START TEST bdev_hello_world 00:19:00.122 ************************************ 00:19:00.122 18:20:34 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:19:00.122 [2024-11-26 18:20:34.116522] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:19:00.122 [2024-11-26 18:20:34.116710] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74020 ] 00:19:00.122 [2024-11-26 18:20:34.286183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.122 [2024-11-26 18:20:34.404663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.379 [2024-11-26 18:20:34.835798] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:00.379 [2024-11-26 18:20:34.835865] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:19:00.379 [2024-11-26 18:20:34.835890] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:00.635 [2024-11-26 18:20:34.838412] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:00.635 [2024-11-26 18:20:34.838772] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:00.635 [2024-11-26 18:20:34.838801] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:00.635 [2024-11-26 18:20:34.839010] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
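The hello_world subtest above is self-contained: hello_bdev loads the bdevs from the JSON config, opens the one named by -b, writes "Hello World!" through an I/O channel, reads it back, and stops the app, which is exactly the NOTICE sequence in the trace. Redone by hand it reduces to the sketch below (paths relative to an SPDK checkout; the name-collection line mirrors the mapfile/jq xtrace further up, with the standalone '.[].name' filter standing in for the per-object '.name' filter blockdev.sh applies inside its own helper):

    # Collect bdev names from a running SPDK target, then exercise the
    # first one with the hello_bdev example (a sketch, not the exact script).
    mapfile -t bdevs_name < <(scripts/rpc.py bdev_get_bdevs | jq -r '.[].name')
    ./build/examples/hello_bdev --json test/bdev/bdev.json -b "${bdevs_name[0]:-nvme0n1}"

run_test only checks the exit status; the closing 'Read string from bdev : Hello World!' notice is the example's own confirmation that the write/read round trip worked.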
00:19:00.635 00:19:00.635 [2024-11-26 18:20:34.839043] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:01.567 00:19:01.567 real 0m1.847s 00:19:01.567 user 0m1.478s 00:19:01.567 sys 0m0.253s 00:19:01.567 18:20:35 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:01.567 18:20:35 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:01.567 ************************************ 00:19:01.567 END TEST bdev_hello_world 00:19:01.568 ************************************ 00:19:01.568 18:20:35 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:19:01.568 18:20:35 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:01.568 18:20:35 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:01.568 18:20:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:01.568 ************************************ 00:19:01.568 START TEST bdev_bounds 00:19:01.568 ************************************ 00:19:01.568 18:20:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:01.568 Process bdevio pid: 74055 00:19:01.568 18:20:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74055 00:19:01.568 18:20:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:01.568 18:20:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:01.568 18:20:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74055' 00:19:01.568 18:20:35 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74055 00:19:01.568 18:20:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74055 ']' 00:19:01.568 18:20:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.568 18:20:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.568 18:20:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.568 18:20:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.568 18:20:35 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:01.826 [2024-11-26 18:20:36.036108] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
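bdev_bounds feeds the same config to bdevio, which registers one CUnit suite per bdev and probes the device edges: reads and writes with offset + nbytes equal to and beyond the size of the blockdev, oversized and overlapping requests, and the writev/readv variants, exactly the test names listed below. Stripped of the cleanup traps and waitforlisten polling that the helpers add, the launch pattern is roughly this sketch:

    # Start bdevio in wait mode against the JSON config, then trigger all
    # registered suites over RPC (flags copied from the trace above).
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    test/bdev/bdevio/tests.py perform_tests   # talks to /var/tmp/spdk.sock
    kill "$bdevio_pid"

The -c 0x7 core mask in the EAL line below is also why three reactors come up before the suites run.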
00:19:01.826 [2024-11-26 18:20:36.036469] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74055 ]
00:19:02.084 [2024-11-26 18:20:36.229019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:19:02.084 [2024-11-26 18:20:36.390099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:02.084 [2024-11-26 18:20:36.390221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:02.084 [2024-11-26 18:20:36.390235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:02.650 18:20:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:02.650 18:20:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:19:02.650 18:20:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:19:02.908 I/O targets:
00:19:02.908 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:19:02.908 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:19:02.908 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:19:02.908 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB)
00:19:02.908 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:19:02.908 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:19:02.908
00:19:02.908
00:19:02.908 CUnit - A unit testing framework for C - Version 2.1-3
00:19:02.908 http://cunit.sourceforge.net/
00:19:02.908
00:19:02.908
00:19:02.908 Suite: bdevio tests on: nvme3n1
00:19:02.908 Test: blockdev write read block ...passed
00:19:02.908 Test: blockdev write zeroes read block ...passed
00:19:02.908 Test: blockdev write zeroes read no split ...passed
00:19:02.908 Test: blockdev write zeroes read split ...passed
00:19:02.908 Test: blockdev write zeroes read split partial ...passed
00:19:02.908 Test: blockdev reset ...passed
00:19:02.908 Test: blockdev write read 8 blocks ...passed
00:19:02.908 Test: blockdev write read size > 128k ...passed
00:19:02.908 Test: blockdev write read invalid size ...passed
00:19:02.908 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:19:02.908 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:19:02.908 Test: blockdev write read max offset ...passed
00:19:02.908 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:19:02.908 Test: blockdev writev readv 8 blocks ...passed
00:19:02.908 Test: blockdev writev readv 30 x 1block ...passed
00:19:02.908 Test: blockdev writev readv block ...passed
00:19:02.908 Test: blockdev writev readv size > 128k ...passed
00:19:02.908 Test: blockdev writev readv size > 128k in two iovs ...passed
00:19:02.908 Test: blockdev comparev and writev ...passed
00:19:02.908 Test: blockdev nvme passthru rw ...passed
00:19:02.908 Test: blockdev nvme passthru vendor specific ...passed
00:19:02.908 Test: blockdev nvme admin passthru ...passed
00:19:02.908 Test: blockdev copy ...passed
00:19:02.908 Suite: bdevio tests on: nvme2n1
00:19:02.908 Test: blockdev write read block ...passed
00:19:02.908 Test: blockdev write zeroes read block ...passed
00:19:02.908 Test: blockdev write zeroes read no split ...passed
00:19:02.908 Test: blockdev write zeroes read split ...passed
00:19:02.908 Test: blockdev write zeroes read split partial ...passed
00:19:02.908 Test: blockdev reset ...passed
00:19:02.908 Test: blockdev write read 8 blocks ...passed
00:19:02.908 Test: blockdev write read size > 128k ...passed
00:19:02.908 Test: blockdev write read invalid size ...passed
00:19:02.908 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:19:02.908 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:19:02.908 Test: blockdev write read max offset ...passed
00:19:02.908 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:19:02.908 Test: blockdev writev readv 8 blocks ...passed
00:19:02.908 Test: blockdev writev readv 30 x 1block ...passed
00:19:02.908 Test: blockdev writev readv block ...passed
00:19:02.908 Test: blockdev writev readv size > 128k ...passed
00:19:02.908 Test: blockdev writev readv size > 128k in two iovs ...passed
00:19:02.908 Test: blockdev comparev and writev ...passed
00:19:02.908 Test: blockdev nvme passthru rw ...passed
00:19:02.908 Test: blockdev nvme passthru vendor specific ...passed
00:19:02.908 Test: blockdev nvme admin passthru ...passed
00:19:02.908 Test: blockdev copy ...passed
00:19:02.908 Suite: bdevio tests on: nvme1n1
00:19:02.908 Test: blockdev write read block ...passed
00:19:02.908 Test: blockdev write zeroes read block ...passed
00:19:02.908 Test: blockdev write zeroes read no split ...passed
00:19:02.908 Test: blockdev write zeroes read split ...passed
00:19:03.166 Test: blockdev write zeroes read split partial ...passed
00:19:03.167 Test: blockdev reset ...passed
00:19:03.167 Test: blockdev write read 8 blocks ...passed
00:19:03.167 Test: blockdev write read size > 128k ...passed
00:19:03.167 Test: blockdev write read invalid size ...passed
00:19:03.167 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:19:03.167 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:19:03.167 Test: blockdev write read max offset ...passed
00:19:03.167 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:19:03.167 Test: blockdev writev readv 8 blocks ...passed
00:19:03.167 Test: blockdev writev readv 30 x 1block ...passed
00:19:03.167 Test: blockdev writev readv block ...passed
00:19:03.167 Test: blockdev writev readv size > 128k ...passed
00:19:03.167 Test: blockdev writev readv size > 128k in two iovs ...passed
00:19:03.167 Test: blockdev comparev and writev ...passed
00:19:03.167 Test: blockdev nvme passthru rw ...passed
00:19:03.167 Test: blockdev nvme passthru vendor specific ...passed
00:19:03.167 Test: blockdev nvme admin passthru ...passed
00:19:03.167 Test: blockdev copy ...passed
00:19:03.167 Suite: bdevio tests on: nvme0n3
00:19:03.167 Test: blockdev write read block ...passed
00:19:03.167 Test: blockdev write zeroes read block ...passed
00:19:03.167 Test: blockdev write zeroes read no split ...passed
00:19:03.167 Test: blockdev write zeroes read split ...passed
00:19:03.167 Test: blockdev write zeroes read split partial ...passed
00:19:03.167 Test: blockdev reset ...passed
00:19:03.167 Test: blockdev write read 8 blocks ...passed
00:19:03.167 Test: blockdev write read size > 128k ...passed
00:19:03.167 Test: blockdev write read invalid size ...passed
00:19:03.167 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:19:03.167 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:19:03.167 Test: blockdev write read max offset ...passed
00:19:03.167 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:19:03.167 Test: blockdev writev readv 8 blocks ...passed
00:19:03.167 Test: blockdev writev readv 30 x 1block ...passed
00:19:03.167 Test: blockdev writev readv block ...passed
00:19:03.167 Test: blockdev writev readv size > 128k ...passed
00:19:03.167 Test: blockdev writev readv size > 128k in two iovs ...passed
00:19:03.167 Test: blockdev comparev and writev ...passed
00:19:03.167 Test: blockdev nvme passthru rw ...passed
00:19:03.167 Test: blockdev nvme passthru vendor specific ...passed
00:19:03.167 Test: blockdev nvme admin passthru ...passed
00:19:03.167 Test: blockdev copy ...passed
00:19:03.167 Suite: bdevio tests on: nvme0n2
00:19:03.167 Test: blockdev write read block ...passed
00:19:03.167 Test: blockdev write zeroes read block ...passed
00:19:03.167 Test: blockdev write zeroes read no split ...passed
00:19:03.167 Test: blockdev write zeroes read split ...passed
00:19:03.167 Test: blockdev write zeroes read split partial ...passed
00:19:03.167 Test: blockdev reset ...passed
00:19:03.167 Test: blockdev write read 8 blocks ...passed
00:19:03.167 Test: blockdev write read size > 128k ...passed
00:19:03.167 Test: blockdev write read invalid size ...passed
00:19:03.167 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:19:03.167 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:19:03.167 Test: blockdev write read max offset ...passed
00:19:03.167 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:19:03.167 Test: blockdev writev readv 8 blocks ...passed
00:19:03.167 Test: blockdev writev readv 30 x 1block ...passed
00:19:03.167 Test: blockdev writev readv block ...passed
00:19:03.167 Test: blockdev writev readv size > 128k ...passed
00:19:03.167 Test: blockdev writev readv size > 128k in two iovs ...passed
00:19:03.167 Test: blockdev comparev and writev ...passed
00:19:03.167 Test: blockdev nvme passthru rw ...passed
00:19:03.167 Test: blockdev nvme passthru vendor specific ...passed
00:19:03.167 Test: blockdev nvme admin passthru ...passed
00:19:03.167 Test: blockdev copy ...passed
00:19:03.167 Suite: bdevio tests on: nvme0n1
00:19:03.167 Test: blockdev write read block ...passed
00:19:03.167 Test: blockdev write zeroes read block ...passed
00:19:03.167 Test: blockdev write zeroes read no split ...passed
00:19:03.167 Test: blockdev write zeroes read split ...passed
00:19:03.426 Test: blockdev write zeroes read split partial ...passed
00:19:03.426 Test: blockdev reset ...passed
00:19:03.426 Test: blockdev write read 8 blocks ...passed
00:19:03.426 Test: blockdev write read size > 128k ...passed
00:19:03.426 Test: blockdev write read invalid size ...passed
00:19:03.426 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:19:03.426 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:19:03.426 Test: blockdev write read max offset ...passed
00:19:03.426 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:19:03.426 Test: blockdev writev readv 8 blocks ...passed
00:19:03.426 Test: blockdev writev readv 30 x 1block ...passed
00:19:03.426 Test: blockdev writev readv block ...passed
00:19:03.426 Test: blockdev writev readv size > 128k ...passed
00:19:03.426 Test: blockdev writev readv size > 128k in two iovs ...passed
00:19:03.426 Test: blockdev comparev and writev ...passed
00:19:03.426 Test: blockdev nvme passthru rw ...passed
00:19:03.426 Test: blockdev nvme passthru vendor specific ...passed
00:19:03.426 Test: blockdev nvme admin passthru ...passed
00:19:03.426 Test: blockdev copy ...passed
00:19:03.426
00:19:03.426 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:19:03.426               suites      6      6    n/a      0        0
00:19:03.426                tests    138    138    138      0        0
00:19:03.426              asserts    780    780    780      0      n/a
00:19:03.426
00:19:03.426 Elapsed time = 1.345 seconds
00:19:03.426 0
00:19:03.426 18:20:37 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74055
00:19:03.426 18:20:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74055 ']'
00:19:03.426 18:20:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74055
00:19:03.426 18:20:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:19:03.426 18:20:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:03.426 18:20:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74055
00:19:03.426 killing process with pid 74055
18:20:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:03.426 18:20:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:03.426 18:20:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74055'
00:19:03.426 18:20:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74055
00:19:03.426 18:20:37 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74055
00:19:04.360 18:20:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:19:04.360
00:19:04.360 real 0m2.864s
00:19:04.360 user 0m7.185s
00:19:04.360 sys 0m0.432s
00:19:04.360 18:20:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:04.360 ************************************
00:19:04.360 END TEST bdev_bounds
00:19:04.360 ************************************
00:19:04.360 18:20:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:19:04.618 18:20:38 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' ''
00:19:04.618 18:20:38 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:19:04.618 18:20:38 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:04.618 18:20:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:04.618 ************************************
00:19:04.618 START TEST bdev_nbd
00:19:04.618 ************************************
00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' ''
00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
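bdev_nbd then checks the kernel-visible path: bdev_svc is started on /var/tmp/spdk-nbd.sock with the same config, each bdev is exported as a /dev/nbdX node, the export is awaited via /proc/partitions, and one 4096-byte direct-I/O read is pulled back, which is what every '1+0 records in / 1+0 records out' dd block below is. A single device's round trip, condensed from the trace into a sketch (needs root and the nbd kernel module; the until loop and the /tmp path stand in for the waitfornbd helper's bounded 20-iteration poll and the nbdtest scratch file):

    # Export one bdev over NBD, read a block back, then detach it.
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
    until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0

After the stop pass, nbd_get_disks returning '[]' is how the test confirms every device detached before the same bdevs are re-exported on nbd0, nbd1, and nbd10 through nbd13 for the second verification round.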
00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74118 00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74118 /var/tmp/spdk-nbd.sock 00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74118 ']' 00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:04.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.618 18:20:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:04.618 [2024-11-26 18:20:38.969947] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
00:19:04.618 [2024-11-26 18:20:38.970478] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.876 [2024-11-26 18:20:39.159794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.876 [2024-11-26 18:20:39.295030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.811 18:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.811 18:20:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:05.811 18:20:39 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:19:05.811 18:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.811 18:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:05.811 18:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:05.811 18:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:19:05.811 18:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:05.811 18:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:05.811 18:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:05.811 18:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:05.811 18:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:05.811 18:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:05.811 18:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:05.811 18:20:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:19:05.811 18:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:05.811 18:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:06.069 18:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:06.069 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:06.069 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:06.069 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:06.069 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:06.069 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:06.069 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:06.069 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:06.069 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:06.069 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:06.069 
1+0 records in 00:19:06.069 1+0 records out 00:19:06.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000805586 s, 5.1 MB/s 00:19:06.069 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.069 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:06.069 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.069 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:06.069 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:06.069 18:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:06.069 18:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:06.069 18:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:19:06.327 18:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:19:06.327 18:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:19:06.327 18:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:19:06.327 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:06.327 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:06.327 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:06.327 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:06.327 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:06.327 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:06.327 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:06.327 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:06.327 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:06.327 1+0 records in 00:19:06.327 1+0 records out 00:19:06.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575272 s, 7.1 MB/s 00:19:06.327 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.327 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:06.327 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.327 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:06.327 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:06.327 18:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:06.327 18:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:06.327 18:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:19:06.585 18:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:19:06.585 18:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:19:06.585 18:20:40 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:19:06.585 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:19:06.585 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:06.585 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:06.585 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:06.585 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:19:06.585 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:06.585 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:06.585 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:06.585 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:06.585 1+0 records in 00:19:06.585 1+0 records out 00:19:06.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000690786 s, 5.9 MB/s 00:19:06.585 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.585 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:06.585 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:06.585 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:06.585 18:20:40 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:06.585 18:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:06.585 18:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:06.585 18:20:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:19:06.843 18:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:19:07.102 18:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:19:07.102 18:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:19:07.102 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:19:07.102 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:07.102 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:07.102 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:07.102 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:19:07.102 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:07.102 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:07.102 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:07.102 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:07.102 1+0 records in 00:19:07.102 1+0 records out 00:19:07.102 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000780371 s, 5.2 MB/s 00:19:07.102 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.102 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:07.102 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.102 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:07.102 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:07.102 18:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:07.102 18:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:07.102 18:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:19:07.360 18:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:19:07.360 18:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:19:07.360 18:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:19:07.360 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:19:07.360 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:07.360 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:07.360 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:07.360 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:19:07.360 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:07.360 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:07.360 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:07.360 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:07.360 1+0 records in 00:19:07.360 1+0 records out 00:19:07.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000670245 s, 6.1 MB/s 00:19:07.360 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.360 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:07.360 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.360 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:07.360 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:07.360 18:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:07.360 18:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:07.360 18:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:19:07.618 18:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:19:07.618 18:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:19:07.618 18:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:19:07.618 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:19:07.618 18:20:41 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:07.618 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:07.618 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:07.618 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:19:07.618 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:07.618 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:07.618 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:07.618 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:07.618 1+0 records in 00:19:07.618 1+0 records out 00:19:07.618 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000885574 s, 4.6 MB/s 00:19:07.618 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.618 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:07.618 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.618 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:07.618 18:20:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:07.618 18:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:07.618 18:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:07.618 18:20:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:07.876 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:07.876 { 00:19:07.876 "nbd_device": "/dev/nbd0", 00:19:07.876 "bdev_name": "nvme0n1" 00:19:07.876 }, 00:19:07.876 { 00:19:07.876 "nbd_device": "/dev/nbd1", 00:19:07.876 "bdev_name": "nvme0n2" 00:19:07.876 }, 00:19:07.876 { 00:19:07.876 "nbd_device": "/dev/nbd2", 00:19:07.876 "bdev_name": "nvme0n3" 00:19:07.876 }, 00:19:07.876 { 00:19:07.876 "nbd_device": "/dev/nbd3", 00:19:07.876 "bdev_name": "nvme1n1" 00:19:07.876 }, 00:19:07.876 { 00:19:07.876 "nbd_device": "/dev/nbd4", 00:19:07.876 "bdev_name": "nvme2n1" 00:19:07.876 }, 00:19:07.876 { 00:19:07.876 "nbd_device": "/dev/nbd5", 00:19:07.876 "bdev_name": "nvme3n1" 00:19:07.876 } 00:19:07.876 ]' 00:19:07.876 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:07.876 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:07.876 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:07.876 { 00:19:07.876 "nbd_device": "/dev/nbd0", 00:19:07.876 "bdev_name": "nvme0n1" 00:19:07.876 }, 00:19:07.876 { 00:19:07.876 "nbd_device": "/dev/nbd1", 00:19:07.876 "bdev_name": "nvme0n2" 00:19:07.876 }, 00:19:07.876 { 00:19:07.876 "nbd_device": "/dev/nbd2", 00:19:07.876 "bdev_name": "nvme0n3" 00:19:07.876 }, 00:19:07.876 { 00:19:07.876 "nbd_device": "/dev/nbd3", 00:19:07.876 "bdev_name": "nvme1n1" 00:19:07.876 }, 00:19:07.876 { 00:19:07.876 "nbd_device": "/dev/nbd4", 00:19:07.876 "bdev_name": "nvme2n1" 00:19:07.876 }, 00:19:07.876 { 00:19:07.876 "nbd_device": 
"/dev/nbd5", 00:19:07.876 "bdev_name": "nvme3n1" 00:19:07.876 } 00:19:07.876 ]' 00:19:07.876 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:19:07.876 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:07.876 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:19:07.876 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:07.876 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:07.876 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:07.876 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:08.443 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:08.443 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:08.443 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:08.443 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:08.443 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:08.443 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:08.443 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:08.443 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:08.443 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:08.443 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:08.701 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:08.701 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:08.701 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:08.701 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:08.701 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:08.701 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:08.701 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:08.701 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:08.701 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:08.701 18:20:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:19:08.959 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:19:08.959 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:19:08.959 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:19:08.959 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:08.959 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:08.959 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:19:08.959 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:08.959 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:08.959 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:08.959 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:19:09.217 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:19:09.217 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:19:09.217 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:19:09.217 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.217 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.217 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:19:09.217 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:09.217 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.217 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:09.217 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:19:09.474 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:19:09.474 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:19:09.474 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:19:09.474 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.474 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.474 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:19:09.474 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:09.474 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.474 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:09.474 18:20:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:19:09.732 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:19:09.732 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:19:09.732 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:19:09.732 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.732 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.732 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:19:09.732 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:09.732 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.732 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:09.732 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.732 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:09.989 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:19:10.553 /dev/nbd0 00:19:10.553 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:10.553 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:10.553 18:20:44 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:10.553 18:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:10.553 18:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:10.553 18:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:10.553 18:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:10.553 18:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:10.553 18:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:10.553 18:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:10.553 18:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:10.553 1+0 records in 00:19:10.553 1+0 records out 00:19:10.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000885319 s, 4.6 MB/s 00:19:10.553 18:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:10.553 18:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:10.553 18:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:10.553 18:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:10.553 18:20:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:10.553 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:10.553 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:10.553 18:20:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:19:10.812 /dev/nbd1 00:19:10.812 18:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:10.812 18:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:10.812 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:10.812 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:10.812 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:10.812 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:10.812 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:10.812 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:10.812 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:10.812 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:10.812 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:10.812 1+0 records in 00:19:10.812 1+0 records out 00:19:10.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508878 s, 8.0 MB/s 00:19:10.812 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:10.812 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:10.812 18:20:45 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:10.812 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:10.812 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:10.812 18:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:10.812 18:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:10.812 18:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:19:11.069 /dev/nbd10 00:19:11.069 18:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:19:11.069 18:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:19:11.069 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:19:11.069 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:11.069 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:11.069 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:11.070 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:19:11.070 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:11.070 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:11.070 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:11.070 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:11.070 1+0 records in 00:19:11.070 1+0 records out 00:19:11.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000662661 s, 6.2 MB/s 00:19:11.070 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:11.070 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:11.070 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:11.070 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:11.070 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:11.070 18:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:11.070 18:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:11.070 18:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:19:11.327 /dev/nbd11 00:19:11.327 18:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:19:11.327 18:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:19:11.327 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:19:11.327 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:11.327 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:11.327 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:11.327 18:20:45 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:19:11.327 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:11.327 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:11.327 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:11.327 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:11.327 1+0 records in 00:19:11.327 1+0 records out 00:19:11.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000992905 s, 4.1 MB/s 00:19:11.327 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:11.327 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:11.327 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:11.327 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:11.327 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:11.327 18:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:11.327 18:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:11.327 18:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:19:11.585 /dev/nbd12 00:19:11.585 18:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:19:11.585 18:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:19:11.585 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:19:11.585 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:11.585 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:11.585 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:11.585 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:19:11.585 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:11.585 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:11.585 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:11.585 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:11.585 1+0 records in 00:19:11.585 1+0 records out 00:19:11.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000748107 s, 5.5 MB/s 00:19:11.585 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:11.585 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:11.585 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:11.585 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:11.585 18:20:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:11.585 18:20:45 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:11.585 18:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:11.585 18:20:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:19:11.842 /dev/nbd13 00:19:11.842 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:19:11.842 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:19:11.842 18:20:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:19:11.842 18:20:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:12.100 18:20:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:12.100 18:20:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:12.100 18:20:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:19:12.100 18:20:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:12.100 18:20:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:12.100 18:20:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:12.100 18:20:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:12.100 1+0 records in 00:19:12.100 1+0 records out 00:19:12.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000735888 s, 5.6 MB/s 00:19:12.100 18:20:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:12.100 18:20:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:12.100 18:20:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:12.100 18:20:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:12.100 18:20:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:12.100 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:12.100 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:12.100 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:12.100 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:12.100 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:12.391 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:12.391 { 00:19:12.391 "nbd_device": "/dev/nbd0", 00:19:12.391 "bdev_name": "nvme0n1" 00:19:12.391 }, 00:19:12.391 { 00:19:12.391 "nbd_device": "/dev/nbd1", 00:19:12.391 "bdev_name": "nvme0n2" 00:19:12.391 }, 00:19:12.391 { 00:19:12.391 "nbd_device": "/dev/nbd10", 00:19:12.391 "bdev_name": "nvme0n3" 00:19:12.391 }, 00:19:12.392 { 00:19:12.392 "nbd_device": "/dev/nbd11", 00:19:12.392 "bdev_name": "nvme1n1" 00:19:12.392 }, 00:19:12.392 { 00:19:12.392 "nbd_device": "/dev/nbd12", 00:19:12.392 "bdev_name": "nvme2n1" 00:19:12.392 }, 00:19:12.392 { 00:19:12.392 "nbd_device": "/dev/nbd13", 00:19:12.392 "bdev_name": "nvme3n1" 00:19:12.392 } 00:19:12.392 ]' 00:19:12.392 18:20:46 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:12.392 { 00:19:12.392 "nbd_device": "/dev/nbd0", 00:19:12.392 "bdev_name": "nvme0n1" 00:19:12.392 }, 00:19:12.392 { 00:19:12.392 "nbd_device": "/dev/nbd1", 00:19:12.392 "bdev_name": "nvme0n2" 00:19:12.392 }, 00:19:12.392 { 00:19:12.392 "nbd_device": "/dev/nbd10", 00:19:12.392 "bdev_name": "nvme0n3" 00:19:12.392 }, 00:19:12.392 { 00:19:12.392 "nbd_device": "/dev/nbd11", 00:19:12.392 "bdev_name": "nvme1n1" 00:19:12.392 }, 00:19:12.392 { 00:19:12.392 "nbd_device": "/dev/nbd12", 00:19:12.392 "bdev_name": "nvme2n1" 00:19:12.392 }, 00:19:12.392 { 00:19:12.392 "nbd_device": "/dev/nbd13", 00:19:12.392 "bdev_name": "nvme3n1" 00:19:12.392 } 00:19:12.392 ]' 00:19:12.392 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:12.392 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:19:12.392 /dev/nbd1 00:19:12.392 /dev/nbd10 00:19:12.392 /dev/nbd11 00:19:12.392 /dev/nbd12 00:19:12.392 /dev/nbd13' 00:19:12.392 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:19:12.392 /dev/nbd1 00:19:12.392 /dev/nbd10 00:19:12.392 /dev/nbd11 00:19:12.392 /dev/nbd12 00:19:12.392 /dev/nbd13' 00:19:12.392 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:12.392 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:19:12.392 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:19:12.392 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:19:12.392 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:19:12.392 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:19:12.392 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:12.392 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:12.392 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:12.392 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:12.392 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:12.392 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:12.392 256+0 records in 00:19:12.392 256+0 records out 00:19:12.392 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0085179 s, 123 MB/s 00:19:12.392 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:12.392 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:12.392 256+0 records in 00:19:12.392 256+0 records out 00:19:12.392 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133618 s, 7.8 MB/s 00:19:12.392 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:12.392 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:19:12.649 256+0 records in 00:19:12.649 256+0 records out 00:19:12.649 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.14537 s, 7.2 MB/s 00:19:12.649 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:12.650 18:20:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:19:12.907 256+0 records in 00:19:12.907 256+0 records out 00:19:12.907 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147709 s, 7.1 MB/s 00:19:12.907 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:12.907 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:19:12.907 256+0 records in 00:19:12.907 256+0 records out 00:19:12.907 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144962 s, 7.2 MB/s 00:19:12.907 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:12.907 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:19:13.166 256+0 records in 00:19:13.166 256+0 records out 00:19:13.166 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166444 s, 6.3 MB/s 00:19:13.166 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:13.166 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:19:13.166 256+0 records in 00:19:13.166 256+0 records out 00:19:13.166 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159253 s, 6.6 MB/s 00:19:13.166 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:19:13.166 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:13.166 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:13.166 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:13.166 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:13.166 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:13.166 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:13.166 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:13.166 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:13.166 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:13.166 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:19:13.166 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:13.166 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:19:13.424 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:13.424 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:19:13.424 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:13.424 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:19:13.424 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:13.424 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:19:13.424 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:13.424 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:13.424 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:13.424 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:13.424 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:13.424 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:13.424 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:13.424 18:20:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:13.683 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:13.683 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:13.683 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:13.683 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:13.683 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:13.683 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:13.683 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:13.683 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:13.683 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:13.683 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:13.941 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:13.941 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:13.941 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:13.941 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:13.941 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:13.941 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:13.941 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:13.941 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:13.941 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:13.941 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:19:14.199 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:19:14.199 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:19:14.199 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:19:14.199 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:14.199 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:14.199 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:19:14.199 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:14.199 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:14.199 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:14.199 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:19:14.457 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:19:14.457 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:19:14.457 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:19:14.457 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:14.457 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:14.457 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:19:14.457 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:14.457 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:14.457 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:14.457 18:20:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:19:14.715 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:19:14.715 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:19:14.715 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:19:14.715 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:14.715 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:14.715 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:19:14.971 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:14.971 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:14.971 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:14.971 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:19:15.228 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:19:15.228 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:19:15.228 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:19:15.228 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:15.228 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:19:15.228 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:19:15.228 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:15.228 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:15.228 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:15.228 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:15.228 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:15.484 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:15.484 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:15.484 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:15.484 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:15.484 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:15.484 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:15.484 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:15.484 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:15.484 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:15.484 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:15.484 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:15.484 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:15.484 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:15.484 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:15.484 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:15.484 18:20:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:15.742 malloc_lvol_verify 00:19:15.998 18:20:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:16.256 d48a9e01-d689-4007-b7b3-dbdcdf162319 00:19:16.256 18:20:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:16.515 5987c260-2168-4de4-86ee-e21c47e3850d 00:19:16.515 18:20:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:16.773 /dev/nbd0 00:19:16.773 18:20:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:16.773 18:20:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:16.773 18:20:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:16.773 18:20:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:16.773 18:20:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:16.773 mke2fs 1.47.0 (5-Feb-2023) 00:19:16.773 
Discarding device blocks: 0/4096 done 00:19:16.773 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:16.773 00:19:16.773 Allocating group tables: 0/1 done 00:19:16.773 Writing inode tables: 0/1 done 00:19:16.773 Creating journal (1024 blocks): done 00:19:16.773 Writing superblocks and filesystem accounting information: 0/1 done 00:19:16.773 00:19:16.773 18:20:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:16.773 18:20:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:16.773 18:20:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:16.773 18:20:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:16.773 18:20:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:16.773 18:20:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:16.773 18:20:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:17.031 18:20:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:17.031 18:20:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:17.031 18:20:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:17.031 18:20:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:17.031 18:20:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:17.031 18:20:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:17.031 18:20:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:17.031 18:20:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:17.031 18:20:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74118 00:19:17.031 18:20:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74118 ']' 00:19:17.031 18:20:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74118 00:19:17.031 18:20:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:17.031 18:20:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.031 18:20:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74118 00:19:17.031 killing process with pid 74118 00:19:17.031 18:20:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:17.031 18:20:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:17.031 18:20:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74118' 00:19:17.031 18:20:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74118 00:19:17.031 18:20:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74118 00:19:18.402 ************************************ 00:19:18.402 END TEST bdev_nbd 00:19:18.402 ************************************ 00:19:18.402 18:20:52 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:18.402 00:19:18.402 real 0m13.703s 00:19:18.402 user 0m19.674s 00:19:18.402 sys 0m4.397s 00:19:18.402 18:20:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:18.402 18:20:52 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@10 -- # set +x 00:19:18.402 18:20:52 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:19:18.402 18:20:52 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:19:18.402 18:20:52 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:19:18.402 18:20:52 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:19:18.402 18:20:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:18.402 18:20:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.402 18:20:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:18.402 ************************************ 00:19:18.402 START TEST bdev_fio 00:19:18.402 ************************************ 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:18.402 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:19:18.402 
18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:19:18.402 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:18.403 ************************************ 00:19:18.403 START TEST bdev_fio_rw_verify 00:19:18.403 ************************************ 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:18.403 18:20:52 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:18.660 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:18.660 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:18.660 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:18.660 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:18.660 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:18.660 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:18.660 fio-3.35 00:19:18.660 Starting 6 threads 00:19:30.859 00:19:30.859 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74553: Tue Nov 26 18:21:03 2024 00:19:30.859 read: IOPS=28.2k, BW=110MiB/s (116MB/s)(1102MiB/10002msec) 00:19:30.859 slat (usec): min=3, max=842, avg= 7.28, stdev= 4.80 00:19:30.859 clat (usec): min=101, max=5044, avg=660.38, 
stdev=228.36 00:19:30.859 lat (usec): min=111, max=5067, avg=667.66, stdev=228.99 00:19:30.859 clat percentiles (usec): 00:19:30.859 | 50.000th=[ 685], 99.000th=[ 1205], 99.900th=[ 1729], 99.990th=[ 3752], 00:19:30.859 | 99.999th=[ 4424] 00:19:30.859 write: IOPS=28.5k, BW=111MiB/s (117MB/s)(1115MiB/10002msec); 0 zone resets 00:19:30.859 slat (usec): min=13, max=3111, avg=26.93, stdev=29.30 00:19:30.859 clat (usec): min=85, max=4502, avg=752.44, stdev=240.90 00:19:30.859 lat (usec): min=107, max=4590, avg=779.37, stdev=243.06 00:19:30.859 clat percentiles (usec): 00:19:30.859 | 50.000th=[ 766], 99.000th=[ 1418], 99.900th=[ 2180], 99.990th=[ 3359], 00:19:30.859 | 99.999th=[ 4359] 00:19:30.859 bw ( KiB/s): min=97981, max=143142, per=99.81%, avg=113887.26, stdev=2384.36, samples=114 00:19:30.859 iops : min=24493, max=35785, avg=28471.32, stdev=596.09, samples=114 00:19:30.859 lat (usec) : 100=0.01%, 250=2.41%, 500=16.96%, 750=35.52%, 1000=37.66% 00:19:30.859 lat (msec) : 2=7.34%, 4=0.10%, 10=0.01% 00:19:30.859 cpu : usr=59.17%, sys=27.09%, ctx=8477, majf=0, minf=24148 00:19:30.859 IO depths : 1=11.8%, 2=24.3%, 4=50.7%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:30.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.859 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.859 issued rwts: total=282170,285322,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.859 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:30.859 00:19:30.859 Run status group 0 (all jobs): 00:19:30.859 READ: bw=110MiB/s (116MB/s), 110MiB/s-110MiB/s (116MB/s-116MB/s), io=1102MiB (1156MB), run=10002-10002msec 00:19:30.859 WRITE: bw=111MiB/s (117MB/s), 111MiB/s-111MiB/s (117MB/s-117MB/s), io=1115MiB (1169MB), run=10002-10002msec 00:19:30.859 ----------------------------------------------------- 00:19:30.859 Suppressions used: 00:19:30.859 count bytes template 00:19:30.859 6 48 /usr/src/fio/parse.c 00:19:30.859 2967 284832 /usr/src/fio/iolog.c 00:19:30.859 1 8 libtcmalloc_minimal.so 00:19:30.859 1 904 libcrypto.so 00:19:30.859 ----------------------------------------------------- 00:19:30.859 00:19:31.118 00:19:31.118 real 0m12.650s 00:19:31.118 user 0m37.564s 00:19:31.118 sys 0m16.707s 00:19:31.118 18:21:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.118 18:21:05 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:31.118 ************************************ 00:19:31.118 END TEST bdev_fio_rw_verify 00:19:31.118 ************************************ 00:19:31.118 18:21:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:31.118 18:21:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:31.118 18:21:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:31.118 18:21:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:31.118 18:21:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:31.118 18:21:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:31.118 18:21:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:31.118 18:21:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:19:31.118 18:21:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:31.118 18:21:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:31.118 18:21:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:31.118 18:21:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:31.118 18:21:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:31.118 18:21:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:31.118 18:21:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:31.118 18:21:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:31.118 18:21:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:31.119 18:21:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c57e8fcf-ae55-4dcb-9ce3-662d60d8147a"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c57e8fcf-ae55-4dcb-9ce3-662d60d8147a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "09e9b5f1-c6ab-4fa4-95ce-87ca1b0daada"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "09e9b5f1-c6ab-4fa4-95ce-87ca1b0daada",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "a0a9cf08-a343-462d-aafc-8ff4eb9a5ba7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a0a9cf08-a343-462d-aafc-8ff4eb9a5ba7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "e779a42a-d46b-4974-b4e0-a481cca584f2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e779a42a-d46b-4974-b4e0-a481cca584f2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "420a00f6-fc1d-4678-b2fc-eac072373a75"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "420a00f6-fc1d-4678-b2fc-eac072373a75",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "69a2f233-9f4d-47db-b897-f3940d90521d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "69a2f233-9f4d-47db-b897-f3940d90521d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:19:31.119 18:21:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:31.119 18:21:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:31.119 /home/vagrant/spdk_repo/spdk 00:19:31.119 18:21:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:31.119 18:21:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:31.119 18:21:05 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:19:31.119 00:19:31.119 real 0m12.867s 00:19:31.119 user 0m37.677s 00:19:31.119 sys 0m16.802s 00:19:31.119 18:21:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.119 18:21:05 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:31.119 ************************************ 00:19:31.119 END TEST bdev_fio 00:19:31.119 ************************************ 00:19:31.119 18:21:05 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:31.119 18:21:05 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:31.119 18:21:05 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:31.119 18:21:05 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.119 18:21:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:31.119 ************************************ 00:19:31.119 START TEST bdev_verify 00:19:31.119 ************************************ 00:19:31.119 18:21:05 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:31.403 [2024-11-26 18:21:05.638074] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:19:31.403 [2024-11-26 18:21:05.638260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74725 ] 00:19:31.403 [2024-11-26 18:21:05.830528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:31.661 [2024-11-26 18:21:05.987707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.661 [2024-11-26 18:21:05.987734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.226 Running I/O for 5 seconds... 
00:19:34.611 21120.00 IOPS, 82.50 MiB/s [2024-11-26T18:21:10.004Z] 22272.00 IOPS, 87.00 MiB/s [2024-11-26T18:21:10.935Z] 21760.00 IOPS, 85.00 MiB/s [2024-11-26T18:21:11.867Z] 21912.00 IOPS, 85.59 MiB/s 00:19:37.406 Latency(us) 00:19:37.406 [2024-11-26T18:21:11.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:37.406 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:37.406 Verification LBA range: start 0x0 length 0x80000 00:19:37.406 nvme0n1 : 5.03 1578.65 6.17 0.00 0.00 80928.84 12094.37 74830.20 00:19:37.406 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:37.406 Verification LBA range: start 0x80000 length 0x80000 00:19:37.406 nvme0n1 : 5.02 1582.45 6.18 0.00 0.00 80733.03 12690.15 83409.45 00:19:37.406 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:37.406 Verification LBA range: start 0x0 length 0x80000 00:19:37.406 nvme0n2 : 5.03 1578.14 6.16 0.00 0.00 80790.33 21328.99 66250.94 00:19:37.406 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:37.406 Verification LBA range: start 0x80000 length 0x80000 00:19:37.407 nvme0n2 : 5.03 1576.33 6.16 0.00 0.00 80880.23 17754.30 69587.32 00:19:37.407 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:37.407 Verification LBA range: start 0x0 length 0x80000 00:19:37.407 nvme0n3 : 5.07 1591.09 6.22 0.00 0.00 79970.19 8400.52 68634.07 00:19:37.407 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:37.407 Verification LBA range: start 0x80000 length 0x80000 00:19:37.407 nvme0n3 : 5.07 1591.91 6.22 0.00 0.00 79925.47 10307.03 80549.70 00:19:37.407 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:37.407 Verification LBA range: start 0x0 length 0x20000 00:19:37.407 nvme1n1 : 5.07 1590.62 6.21 0.00 0.00 79831.59 8936.73 76260.07 00:19:37.407 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:37.407 Verification LBA range: start 0x20000 length 0x20000 00:19:37.407 nvme1n1 : 5.08 1588.30 6.20 0.00 0.00 79931.08 17396.83 76736.70 00:19:37.407 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:37.407 Verification LBA range: start 0x0 length 0xbd0bd 00:19:37.407 nvme2n1 : 5.07 2858.73 11.17 0.00 0.00 44244.19 3783.21 74353.57 00:19:37.407 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:37.407 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:19:37.407 nvme2n1 : 5.08 2783.31 10.87 0.00 0.00 45457.06 5630.14 70063.94 00:19:37.407 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:37.407 Verification LBA range: start 0x0 length 0xa0000 00:19:37.407 nvme3n1 : 5.07 1591.54 6.22 0.00 0.00 79288.01 10366.60 77213.32 00:19:37.407 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:37.407 Verification LBA range: start 0xa0000 length 0xa0000 00:19:37.407 nvme3n1 : 5.09 1583.73 6.19 0.00 0.00 79668.80 1310.72 75306.82 00:19:37.407 [2024-11-26T18:21:11.868Z] =================================================================================================================== 00:19:37.407 [2024-11-26T18:21:11.868Z] Total : 21494.81 83.96 0.00 0.00 70883.23 1310.72 83409.45 00:19:38.338 00:19:38.338 real 0m7.213s 00:19:38.338 user 0m11.329s 00:19:38.338 sys 0m1.787s 00:19:38.338 18:21:12 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:38.338 
18:21:12 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:38.338 ************************************ 00:19:38.338 END TEST bdev_verify 00:19:38.338 ************************************ 00:19:38.338 18:21:12 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:38.338 18:21:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:38.338 18:21:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:38.338 18:21:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:38.338 ************************************ 00:19:38.338 START TEST bdev_verify_big_io 00:19:38.338 ************************************ 00:19:38.338 18:21:12 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:38.595 [2024-11-26 18:21:12.933116] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:19:38.595 [2024-11-26 18:21:12.933290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74827 ] 00:19:38.853 [2024-11-26 18:21:13.112422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:38.853 [2024-11-26 18:21:13.245758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.853 [2024-11-26 18:21:13.245762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.417 Running I/O for 5 seconds... 
00:19:45.311 1040.00 IOPS, 65.00 MiB/s [2024-11-26T18:21:20.029Z] 3208.50 IOPS, 200.53 MiB/s 00:19:45.568 Latency(us) 00:19:45.568 [2024-11-26T18:21:20.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:45.568 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:45.568 Verification LBA range: start 0x0 length 0x8000 00:19:45.568 nvme0n1 : 5.81 107.45 6.72 0.00 0.00 1103343.63 58386.62 1250665.19 00:19:45.568 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:45.568 Verification LBA range: start 0x8000 length 0x8000 00:19:45.568 nvme0n1 : 5.63 144.98 9.06 0.00 0.00 847980.35 81979.58 1105771.05 00:19:45.568 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:45.568 Verification LBA range: start 0x0 length 0x8000 00:19:45.568 nvme0n2 : 5.82 136.05 8.50 0.00 0.00 873161.28 26691.03 907494.87 00:19:45.568 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:45.568 Verification LBA range: start 0x8000 length 0x8000 00:19:45.568 nvme0n2 : 5.82 153.93 9.62 0.00 0.00 779062.66 91988.71 835047.80 00:19:45.568 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:45.568 Verification LBA range: start 0x0 length 0x8000 00:19:45.568 nvme0n3 : 5.90 116.66 7.29 0.00 0.00 1002430.19 12928.47 2135282.04 00:19:45.568 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:45.568 Verification LBA range: start 0x8000 length 0x8000 00:19:45.568 nvme0n3 : 5.83 146.79 9.17 0.00 0.00 790216.12 21328.99 1624339.55 00:19:45.568 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:45.568 Verification LBA range: start 0x0 length 0x2000 00:19:45.568 nvme1n1 : 5.86 161.20 10.08 0.00 0.00 706497.01 39559.91 888429.85 00:19:45.568 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:45.568 Verification LBA range: start 0x2000 length 0x2000 00:19:45.568 nvme1n1 : 5.83 155.18 9.70 0.00 0.00 730779.12 107240.73 1692973.61 00:19:45.568 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:45.568 Verification LBA range: start 0x0 length 0xbd0b 00:19:45.568 nvme2n1 : 5.90 105.74 6.61 0.00 0.00 1042601.38 71493.82 2577590.46 00:19:45.568 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:45.568 Verification LBA range: start 0xbd0b length 0xbd0b 00:19:45.568 nvme2n1 : 5.85 230.05 14.38 0.00 0.00 486719.48 6881.28 789291.75 00:19:45.568 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:45.568 Verification LBA range: start 0x0 length 0xa000 00:19:45.568 nvme3n1 : 5.91 170.51 10.66 0.00 0.00 627092.50 2234.18 1151527.10 00:19:45.568 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:45.568 Verification LBA range: start 0xa000 length 0xa000 00:19:45.568 nvme3n1 : 5.84 182.21 11.39 0.00 0.00 592706.54 5332.25 953250.91 00:19:45.568 [2024-11-26T18:21:20.029Z] =================================================================================================================== 00:19:45.568 [2024-11-26T18:21:20.029Z] Total : 1810.75 113.17 0.00 0.00 760500.59 2234.18 2577590.46 00:19:46.942 00:19:46.942 real 0m8.358s 00:19:46.942 user 0m15.010s 00:19:46.942 sys 0m0.657s 00:19:46.942 18:21:21 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:46.942 18:21:21 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set 
+x 00:19:46.942 ************************************ 00:19:46.942 END TEST bdev_verify_big_io 00:19:46.942 ************************************ 00:19:46.942 18:21:21 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:46.943 18:21:21 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:46.943 18:21:21 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:46.943 18:21:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:46.943 ************************************ 00:19:46.943 START TEST bdev_write_zeroes 00:19:46.943 ************************************ 00:19:46.943 18:21:21 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:46.943 [2024-11-26 18:21:21.320829] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:19:46.943 [2024-11-26 18:21:21.321049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74943 ] 00:19:47.200 [2024-11-26 18:21:21.509944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.200 [2024-11-26 18:21:21.638317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.766 Running I/O for 1 seconds... 00:19:48.959 67136.00 IOPS, 262.25 MiB/s 00:19:48.959 Latency(us) 00:19:48.959 [2024-11-26T18:21:23.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.959 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:48.959 nvme0n1 : 1.03 10070.32 39.34 0.00 0.00 12697.03 6255.71 25141.99 00:19:48.959 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:48.959 nvme0n2 : 1.03 10055.91 39.28 0.00 0.00 12704.54 6404.65 23354.65 00:19:48.959 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:48.959 nvme0n3 : 1.03 10040.82 39.22 0.00 0.00 12712.57 6464.23 22520.55 00:19:48.959 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:48.959 nvme1n1 : 1.03 10113.35 39.51 0.00 0.00 12610.25 6494.02 24784.52 00:19:48.959 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:48.959 nvme2n1 : 1.03 16335.07 63.81 0.00 0.00 7786.84 3023.59 23116.33 00:19:48.959 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:48.959 nvme3n1 : 1.03 10085.13 39.40 0.00 0.00 12569.60 4557.73 25618.62 00:19:48.959 [2024-11-26T18:21:23.420Z] =================================================================================================================== 00:19:48.959 [2024-11-26T18:21:23.420Z] Total : 66700.61 260.55 0.00 0.00 11468.08 3023.59 25618.62 00:19:49.893 00:19:49.893 real 0m3.059s 00:19:49.893 user 0m2.237s 00:19:49.893 sys 0m0.626s 00:19:49.894 18:21:24 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:49.894 18:21:24 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:49.894 ************************************ 00:19:49.894 END TEST 
bdev_write_zeroes 00:19:49.894 ************************************ 00:19:49.894 18:21:24 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:49.894 18:21:24 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:49.894 18:21:24 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:49.894 18:21:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:49.894 ************************************ 00:19:49.894 START TEST bdev_json_nonenclosed 00:19:49.894 ************************************ 00:19:49.894 18:21:24 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:50.152 [2024-11-26 18:21:24.430035] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:19:50.152 [2024-11-26 18:21:24.430230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75004 ] 00:19:50.410 [2024-11-26 18:21:24.616691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.410 [2024-11-26 18:21:24.739404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.410 [2024-11-26 18:21:24.739532] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:50.410 [2024-11-26 18:21:24.739560] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:50.410 [2024-11-26 18:21:24.739609] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:50.668 00:19:50.668 real 0m0.676s 00:19:50.668 user 0m0.416s 00:19:50.668 sys 0m0.155s 00:19:50.668 18:21:24 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:50.668 18:21:24 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:50.668 ************************************ 00:19:50.668 END TEST bdev_json_nonenclosed 00:19:50.668 ************************************ 00:19:50.668 18:21:25 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:50.668 18:21:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:50.668 18:21:25 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:50.668 18:21:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:50.668 ************************************ 00:19:50.668 START TEST bdev_json_nonarray 00:19:50.668 ************************************ 00:19:50.668 18:21:25 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:50.926 [2024-11-26 18:21:25.161046] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
00:19:50.926 [2024-11-26 18:21:25.161237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75024 ] 00:19:50.926 [2024-11-26 18:21:25.349687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.183 [2024-11-26 18:21:25.481788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.183 [2024-11-26 18:21:25.481924] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:19:51.183 [2024-11-26 18:21:25.481954] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:51.183 [2024-11-26 18:21:25.481969] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:51.441 00:19:51.441 real 0m0.698s 00:19:51.441 user 0m0.434s 00:19:51.441 sys 0m0.158s 00:19:51.441 18:21:25 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:51.441 18:21:25 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:51.441 ************************************ 00:19:51.441 END TEST bdev_json_nonarray 00:19:51.441 ************************************ 00:19:51.441 18:21:25 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:19:51.441 18:21:25 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:19:51.441 18:21:25 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:19:51.441 18:21:25 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:19:51.441 18:21:25 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:19:51.441 18:21:25 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:51.441 18:21:25 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:51.441 18:21:25 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:19:51.441 18:21:25 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:19:51.441 18:21:25 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:19:51.441 18:21:25 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:19:51.441 18:21:25 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:52.033 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:52.999 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:52.999 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:52.999 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:19:52.999 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:19:52.999 00:19:52.999 real 0m58.556s 00:19:52.999 user 1m41.859s 00:19:52.999 sys 0m27.983s 00:19:52.999 18:21:27 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:52.999 18:21:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:52.999 ************************************ 00:19:52.999 END TEST blockdev_xnvme 00:19:52.999 ************************************ 00:19:53.257 18:21:27 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:19:53.257 18:21:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:53.257 18:21:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:53.257 18:21:27 -- 
common/autotest_common.sh@10 -- # set +x 00:19:53.257 ************************************ 00:19:53.257 START TEST ublk 00:19:53.257 ************************************ 00:19:53.257 18:21:27 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:19:53.257 * Looking for test storage... 00:19:53.257 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:19:53.257 18:21:27 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:53.257 18:21:27 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:19:53.257 18:21:27 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:53.257 18:21:27 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:53.257 18:21:27 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:53.257 18:21:27 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:53.257 18:21:27 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:53.257 18:21:27 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:19:53.257 18:21:27 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:19:53.257 18:21:27 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:19:53.257 18:21:27 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:19:53.257 18:21:27 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:19:53.257 18:21:27 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:19:53.257 18:21:27 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:19:53.257 18:21:27 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:53.257 18:21:27 ublk -- scripts/common.sh@344 -- # case "$op" in 00:19:53.257 18:21:27 ublk -- scripts/common.sh@345 -- # : 1 00:19:53.257 18:21:27 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:53.257 18:21:27 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:53.257 18:21:27 ublk -- scripts/common.sh@365 -- # decimal 1 00:19:53.257 18:21:27 ublk -- scripts/common.sh@353 -- # local d=1 00:19:53.257 18:21:27 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:53.257 18:21:27 ublk -- scripts/common.sh@355 -- # echo 1 00:19:53.257 18:21:27 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:19:53.257 18:21:27 ublk -- scripts/common.sh@366 -- # decimal 2 00:19:53.257 18:21:27 ublk -- scripts/common.sh@353 -- # local d=2 00:19:53.257 18:21:27 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:53.257 18:21:27 ublk -- scripts/common.sh@355 -- # echo 2 00:19:53.257 18:21:27 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:19:53.257 18:21:27 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:53.257 18:21:27 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:53.257 18:21:27 ublk -- scripts/common.sh@368 -- # return 0 00:19:53.257 18:21:27 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:53.257 18:21:27 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:53.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.257 --rc genhtml_branch_coverage=1 00:19:53.257 --rc genhtml_function_coverage=1 00:19:53.257 --rc genhtml_legend=1 00:19:53.257 --rc geninfo_all_blocks=1 00:19:53.257 --rc geninfo_unexecuted_blocks=1 00:19:53.257 00:19:53.257 ' 00:19:53.257 18:21:27 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:53.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.257 --rc genhtml_branch_coverage=1 00:19:53.257 --rc genhtml_function_coverage=1 00:19:53.257 --rc genhtml_legend=1 00:19:53.257 --rc geninfo_all_blocks=1 00:19:53.257 --rc geninfo_unexecuted_blocks=1 00:19:53.257 00:19:53.257 ' 00:19:53.257 18:21:27 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:53.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.257 --rc genhtml_branch_coverage=1 00:19:53.257 --rc genhtml_function_coverage=1 00:19:53.257 --rc genhtml_legend=1 00:19:53.257 --rc geninfo_all_blocks=1 00:19:53.257 --rc geninfo_unexecuted_blocks=1 00:19:53.257 00:19:53.257 ' 00:19:53.257 18:21:27 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:53.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.257 --rc genhtml_branch_coverage=1 00:19:53.257 --rc genhtml_function_coverage=1 00:19:53.257 --rc genhtml_legend=1 00:19:53.257 --rc geninfo_all_blocks=1 00:19:53.257 --rc geninfo_unexecuted_blocks=1 00:19:53.257 00:19:53.257 ' 00:19:53.257 18:21:27 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:19:53.257 18:21:27 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:19:53.257 18:21:27 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:19:53.257 18:21:27 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:19:53.257 18:21:27 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:19:53.257 18:21:27 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:19:53.257 18:21:27 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:19:53.257 18:21:27 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:19:53.257 18:21:27 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:19:53.257 18:21:27 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:19:53.257 18:21:27 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:19:53.257 18:21:27 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:19:53.257 18:21:27 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:19:53.257 18:21:27 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:19:53.257 18:21:27 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:19:53.258 18:21:27 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:19:53.258 18:21:27 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:19:53.258 18:21:27 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:19:53.258 18:21:27 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:19:53.258 18:21:27 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:19:53.258 18:21:27 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:53.258 18:21:27 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:53.258 18:21:27 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:53.258 ************************************ 00:19:53.258 START TEST test_save_ublk_config 00:19:53.258 ************************************ 00:19:53.258 18:21:27 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:19:53.258 18:21:27 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:19:53.258 18:21:27 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75318 00:19:53.258 18:21:27 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:19:53.258 18:21:27 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:19:53.258 18:21:27 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75318 00:19:53.258 18:21:27 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75318 ']' 00:19:53.258 18:21:27 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.258 18:21:27 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.258 18:21:27 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.258 18:21:27 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.258 18:21:27 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:53.516 [2024-11-26 18:21:27.837222] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
00:19:53.516 [2024-11-26 18:21:27.837436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75318 ] 00:19:53.774 [2024-11-26 18:21:28.031790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.774 [2024-11-26 18:21:28.182576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.707 18:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.707 18:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:19:54.707 18:21:29 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:19:54.707 18:21:29 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:19:54.708 18:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.708 18:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:54.708 [2024-11-26 18:21:29.069643] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:54.708 [2024-11-26 18:21:29.070836] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:54.708 malloc0 00:19:54.708 [2024-11-26 18:21:29.155863] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:19:54.708 [2024-11-26 18:21:29.156054] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:19:54.708 [2024-11-26 18:21:29.156074] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:54.708 [2024-11-26 18:21:29.156085] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:54.708 [2024-11-26 18:21:29.163628] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:54.708 [2024-11-26 18:21:29.163659] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:54.965 [2024-11-26 18:21:29.171627] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:54.965 [2024-11-26 18:21:29.171753] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:54.965 [2024-11-26 18:21:29.188616] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:54.965 0 00:19:54.965 18:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.965 18:21:29 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:19:54.965 18:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.965 18:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:55.224 18:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.224 18:21:29 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:19:55.224 "subsystems": [ 00:19:55.224 { 00:19:55.224 "subsystem": "fsdev", 00:19:55.224 "config": [ 00:19:55.224 { 00:19:55.224 "method": "fsdev_set_opts", 00:19:55.224 "params": { 00:19:55.224 "fsdev_io_pool_size": 65535, 00:19:55.224 "fsdev_io_cache_size": 256 00:19:55.224 } 00:19:55.224 } 00:19:55.224 ] 00:19:55.224 }, 00:19:55.224 { 00:19:55.224 "subsystem": "keyring", 00:19:55.224 "config": [] 00:19:55.224 }, 00:19:55.224 { 00:19:55.224 "subsystem": "iobuf", 00:19:55.224 "config": [ 00:19:55.224 { 
00:19:55.224 "method": "iobuf_set_options", 00:19:55.224 "params": { 00:19:55.224 "small_pool_count": 8192, 00:19:55.224 "large_pool_count": 1024, 00:19:55.224 "small_bufsize": 8192, 00:19:55.224 "large_bufsize": 135168, 00:19:55.224 "enable_numa": false 00:19:55.224 } 00:19:55.224 } 00:19:55.224 ] 00:19:55.224 }, 00:19:55.224 { 00:19:55.224 "subsystem": "sock", 00:19:55.224 "config": [ 00:19:55.224 { 00:19:55.224 "method": "sock_set_default_impl", 00:19:55.224 "params": { 00:19:55.224 "impl_name": "posix" 00:19:55.224 } 00:19:55.224 }, 00:19:55.224 { 00:19:55.224 "method": "sock_impl_set_options", 00:19:55.224 "params": { 00:19:55.224 "impl_name": "ssl", 00:19:55.224 "recv_buf_size": 4096, 00:19:55.224 "send_buf_size": 4096, 00:19:55.224 "enable_recv_pipe": true, 00:19:55.224 "enable_quickack": false, 00:19:55.224 "enable_placement_id": 0, 00:19:55.224 "enable_zerocopy_send_server": true, 00:19:55.224 "enable_zerocopy_send_client": false, 00:19:55.224 "zerocopy_threshold": 0, 00:19:55.224 "tls_version": 0, 00:19:55.224 "enable_ktls": false 00:19:55.224 } 00:19:55.224 }, 00:19:55.224 { 00:19:55.224 "method": "sock_impl_set_options", 00:19:55.224 "params": { 00:19:55.224 "impl_name": "posix", 00:19:55.224 "recv_buf_size": 2097152, 00:19:55.224 "send_buf_size": 2097152, 00:19:55.224 "enable_recv_pipe": true, 00:19:55.224 "enable_quickack": false, 00:19:55.224 "enable_placement_id": 0, 00:19:55.224 "enable_zerocopy_send_server": true, 00:19:55.224 "enable_zerocopy_send_client": false, 00:19:55.224 "zerocopy_threshold": 0, 00:19:55.224 "tls_version": 0, 00:19:55.224 "enable_ktls": false 00:19:55.224 } 00:19:55.224 } 00:19:55.224 ] 00:19:55.224 }, 00:19:55.224 { 00:19:55.224 "subsystem": "vmd", 00:19:55.224 "config": [] 00:19:55.224 }, 00:19:55.224 { 00:19:55.224 "subsystem": "accel", 00:19:55.224 "config": [ 00:19:55.224 { 00:19:55.224 "method": "accel_set_options", 00:19:55.224 "params": { 00:19:55.224 "small_cache_size": 128, 00:19:55.224 "large_cache_size": 16, 00:19:55.224 "task_count": 2048, 00:19:55.224 "sequence_count": 2048, 00:19:55.224 "buf_count": 2048 00:19:55.224 } 00:19:55.224 } 00:19:55.224 ] 00:19:55.224 }, 00:19:55.224 { 00:19:55.224 "subsystem": "bdev", 00:19:55.224 "config": [ 00:19:55.224 { 00:19:55.224 "method": "bdev_set_options", 00:19:55.224 "params": { 00:19:55.224 "bdev_io_pool_size": 65535, 00:19:55.224 "bdev_io_cache_size": 256, 00:19:55.224 "bdev_auto_examine": true, 00:19:55.224 "iobuf_small_cache_size": 128, 00:19:55.224 "iobuf_large_cache_size": 16 00:19:55.224 } 00:19:55.224 }, 00:19:55.224 { 00:19:55.224 "method": "bdev_raid_set_options", 00:19:55.224 "params": { 00:19:55.224 "process_window_size_kb": 1024, 00:19:55.224 "process_max_bandwidth_mb_sec": 0 00:19:55.224 } 00:19:55.224 }, 00:19:55.225 { 00:19:55.225 "method": "bdev_iscsi_set_options", 00:19:55.225 "params": { 00:19:55.225 "timeout_sec": 30 00:19:55.225 } 00:19:55.225 }, 00:19:55.225 { 00:19:55.225 "method": "bdev_nvme_set_options", 00:19:55.225 "params": { 00:19:55.225 "action_on_timeout": "none", 00:19:55.225 "timeout_us": 0, 00:19:55.225 "timeout_admin_us": 0, 00:19:55.225 "keep_alive_timeout_ms": 10000, 00:19:55.225 "arbitration_burst": 0, 00:19:55.225 "low_priority_weight": 0, 00:19:55.225 "medium_priority_weight": 0, 00:19:55.225 "high_priority_weight": 0, 00:19:55.225 "nvme_adminq_poll_period_us": 10000, 00:19:55.225 "nvme_ioq_poll_period_us": 0, 00:19:55.225 "io_queue_requests": 0, 00:19:55.225 "delay_cmd_submit": true, 00:19:55.225 "transport_retry_count": 4, 00:19:55.225 
"bdev_retry_count": 3, 00:19:55.225 "transport_ack_timeout": 0, 00:19:55.225 "ctrlr_loss_timeout_sec": 0, 00:19:55.225 "reconnect_delay_sec": 0, 00:19:55.225 "fast_io_fail_timeout_sec": 0, 00:19:55.225 "disable_auto_failback": false, 00:19:55.225 "generate_uuids": false, 00:19:55.225 "transport_tos": 0, 00:19:55.225 "nvme_error_stat": false, 00:19:55.225 "rdma_srq_size": 0, 00:19:55.225 "io_path_stat": false, 00:19:55.225 "allow_accel_sequence": false, 00:19:55.225 "rdma_max_cq_size": 0, 00:19:55.225 "rdma_cm_event_timeout_ms": 0, 00:19:55.225 "dhchap_digests": [ 00:19:55.225 "sha256", 00:19:55.225 "sha384", 00:19:55.225 "sha512" 00:19:55.225 ], 00:19:55.225 "dhchap_dhgroups": [ 00:19:55.225 "null", 00:19:55.225 "ffdhe2048", 00:19:55.225 "ffdhe3072", 00:19:55.225 "ffdhe4096", 00:19:55.225 "ffdhe6144", 00:19:55.225 "ffdhe8192" 00:19:55.225 ] 00:19:55.225 } 00:19:55.225 }, 00:19:55.225 { 00:19:55.225 "method": "bdev_nvme_set_hotplug", 00:19:55.225 "params": { 00:19:55.225 "period_us": 100000, 00:19:55.225 "enable": false 00:19:55.225 } 00:19:55.225 }, 00:19:55.225 { 00:19:55.225 "method": "bdev_malloc_create", 00:19:55.225 "params": { 00:19:55.225 "name": "malloc0", 00:19:55.225 "num_blocks": 8192, 00:19:55.225 "block_size": 4096, 00:19:55.225 "physical_block_size": 4096, 00:19:55.225 "uuid": "84f06d58-9293-4fee-9b87-23fff1789453", 00:19:55.225 "optimal_io_boundary": 0, 00:19:55.225 "md_size": 0, 00:19:55.225 "dif_type": 0, 00:19:55.225 "dif_is_head_of_md": false, 00:19:55.225 "dif_pi_format": 0 00:19:55.225 } 00:19:55.225 }, 00:19:55.225 { 00:19:55.225 "method": "bdev_wait_for_examine" 00:19:55.225 } 00:19:55.225 ] 00:19:55.225 }, 00:19:55.225 { 00:19:55.225 "subsystem": "scsi", 00:19:55.225 "config": null 00:19:55.225 }, 00:19:55.225 { 00:19:55.225 "subsystem": "scheduler", 00:19:55.225 "config": [ 00:19:55.225 { 00:19:55.225 "method": "framework_set_scheduler", 00:19:55.225 "params": { 00:19:55.225 "name": "static" 00:19:55.225 } 00:19:55.225 } 00:19:55.225 ] 00:19:55.225 }, 00:19:55.225 { 00:19:55.225 "subsystem": "vhost_scsi", 00:19:55.225 "config": [] 00:19:55.225 }, 00:19:55.225 { 00:19:55.225 "subsystem": "vhost_blk", 00:19:55.225 "config": [] 00:19:55.225 }, 00:19:55.225 { 00:19:55.225 "subsystem": "ublk", 00:19:55.225 "config": [ 00:19:55.225 { 00:19:55.225 "method": "ublk_create_target", 00:19:55.225 "params": { 00:19:55.225 "cpumask": "1" 00:19:55.225 } 00:19:55.225 }, 00:19:55.225 { 00:19:55.225 "method": "ublk_start_disk", 00:19:55.225 "params": { 00:19:55.225 "bdev_name": "malloc0", 00:19:55.225 "ublk_id": 0, 00:19:55.225 "num_queues": 1, 00:19:55.225 "queue_depth": 128 00:19:55.225 } 00:19:55.225 } 00:19:55.225 ] 00:19:55.225 }, 00:19:55.225 { 00:19:55.225 "subsystem": "nbd", 00:19:55.225 "config": [] 00:19:55.225 }, 00:19:55.225 { 00:19:55.225 "subsystem": "nvmf", 00:19:55.225 "config": [ 00:19:55.225 { 00:19:55.225 "method": "nvmf_set_config", 00:19:55.225 "params": { 00:19:55.225 "discovery_filter": "match_any", 00:19:55.225 "admin_cmd_passthru": { 00:19:55.225 "identify_ctrlr": false 00:19:55.225 }, 00:19:55.225 "dhchap_digests": [ 00:19:55.225 "sha256", 00:19:55.225 "sha384", 00:19:55.225 "sha512" 00:19:55.225 ], 00:19:55.225 "dhchap_dhgroups": [ 00:19:55.225 "null", 00:19:55.225 "ffdhe2048", 00:19:55.225 "ffdhe3072", 00:19:55.225 "ffdhe4096", 00:19:55.225 "ffdhe6144", 00:19:55.225 "ffdhe8192" 00:19:55.225 ] 00:19:55.225 } 00:19:55.225 }, 00:19:55.225 { 00:19:55.225 "method": "nvmf_set_max_subsystems", 00:19:55.225 "params": { 00:19:55.225 "max_subsystems": 1024 
00:19:55.225 } 00:19:55.225 }, 00:19:55.225 { 00:19:55.225 "method": "nvmf_set_crdt", 00:19:55.225 "params": { 00:19:55.225 "crdt1": 0, 00:19:55.225 "crdt2": 0, 00:19:55.225 "crdt3": 0 00:19:55.225 } 00:19:55.225 } 00:19:55.225 ] 00:19:55.225 }, 00:19:55.225 { 00:19:55.225 "subsystem": "iscsi", 00:19:55.225 "config": [ 00:19:55.225 { 00:19:55.225 "method": "iscsi_set_options", 00:19:55.225 "params": { 00:19:55.225 "node_base": "iqn.2016-06.io.spdk", 00:19:55.225 "max_sessions": 128, 00:19:55.225 "max_connections_per_session": 2, 00:19:55.225 "max_queue_depth": 64, 00:19:55.225 "default_time2wait": 2, 00:19:55.225 "default_time2retain": 20, 00:19:55.225 "first_burst_length": 8192, 00:19:55.225 "immediate_data": true, 00:19:55.225 "allow_duplicated_isid": false, 00:19:55.225 "error_recovery_level": 0, 00:19:55.225 "nop_timeout": 60, 00:19:55.225 "nop_in_interval": 30, 00:19:55.225 "disable_chap": false, 00:19:55.225 "require_chap": false, 00:19:55.225 "mutual_chap": false, 00:19:55.225 "chap_group": 0, 00:19:55.225 "max_large_datain_per_connection": 64, 00:19:55.225 "max_r2t_per_connection": 4, 00:19:55.225 "pdu_pool_size": 36864, 00:19:55.225 "immediate_data_pool_size": 16384, 00:19:55.225 "data_out_pool_size": 2048 00:19:55.225 } 00:19:55.225 } 00:19:55.225 ] 00:19:55.225 } 00:19:55.225 ] 00:19:55.225 }' 00:19:55.225 18:21:29 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75318 00:19:55.225 18:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75318 ']' 00:19:55.225 18:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75318 00:19:55.225 18:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:19:55.225 18:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:55.225 18:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75318 00:19:55.225 killing process with pid 75318 00:19:55.225 18:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:55.225 18:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:55.225 18:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75318' 00:19:55.225 18:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75318 00:19:55.225 18:21:29 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75318 00:19:56.598 [2024-11-26 18:21:30.821894] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:56.598 [2024-11-26 18:21:30.849751] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:56.598 [2024-11-26 18:21:30.849912] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:56.598 [2024-11-26 18:21:30.856673] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:56.598 [2024-11-26 18:21:30.856736] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:56.598 [2024-11-26 18:21:30.856759] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:56.598 [2024-11-26 18:21:30.856791] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:56.598 [2024-11-26 18:21:30.857012] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:58.501 18:21:32 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75384 00:19:58.501 18:21:32 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 75384 00:19:58.501 18:21:32 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75384 ']' 00:19:58.501 18:21:32 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.501 18:21:32 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:58.501 18:21:32 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:19:58.501 18:21:32 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:19:58.501 "subsystems": [ 00:19:58.501 { 00:19:58.501 "subsystem": "fsdev", 00:19:58.501 "config": [ 00:19:58.501 { 00:19:58.501 "method": "fsdev_set_opts", 00:19:58.501 "params": { 00:19:58.501 "fsdev_io_pool_size": 65535, 00:19:58.501 "fsdev_io_cache_size": 256 00:19:58.501 } 00:19:58.501 } 00:19:58.501 ] 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 "subsystem": "keyring", 00:19:58.501 "config": [] 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 "subsystem": "iobuf", 00:19:58.501 "config": [ 00:19:58.501 { 00:19:58.501 "method": "iobuf_set_options", 00:19:58.501 "params": { 00:19:58.501 "small_pool_count": 8192, 00:19:58.501 "large_pool_count": 1024, 00:19:58.501 "small_bufsize": 8192, 00:19:58.501 "large_bufsize": 135168, 00:19:58.501 "enable_numa": false 00:19:58.501 } 00:19:58.501 } 00:19:58.501 ] 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 "subsystem": "sock", 00:19:58.501 "config": [ 00:19:58.501 { 00:19:58.501 "method": "sock_set_default_impl", 00:19:58.501 "params": { 00:19:58.501 "impl_name": "posix" 00:19:58.501 } 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 "method": "sock_impl_set_options", 00:19:58.501 "params": { 00:19:58.501 "impl_name": "ssl", 00:19:58.501 "recv_buf_size": 4096, 00:19:58.501 "send_buf_size": 4096, 00:19:58.501 "enable_recv_pipe": true, 00:19:58.501 "enable_quickack": false, 00:19:58.501 "enable_placement_id": 0, 00:19:58.501 "enable_zerocopy_send_server": true, 00:19:58.501 "enable_zerocopy_send_client": false, 00:19:58.501 "zerocopy_threshold": 0, 00:19:58.501 "tls_version": 0, 00:19:58.501 "enable_ktls": false 00:19:58.501 } 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 "method": "sock_impl_set_options", 00:19:58.501 "params": { 00:19:58.501 "impl_name": "posix", 00:19:58.501 "recv_buf_size": 2097152, 00:19:58.501 "send_buf_size": 2097152, 00:19:58.501 "enable_recv_pipe": true, 00:19:58.501 "enable_quickack": false, 00:19:58.501 "enable_placement_id": 0, 00:19:58.501 "enable_zerocopy_send_server": true, 00:19:58.501 "enable_zerocopy_send_client": false, 00:19:58.501 "zerocopy_threshold": 0, 00:19:58.501 "tls_version": 0, 00:19:58.501 "enable_ktls": false 00:19:58.501 } 00:19:58.501 } 00:19:58.501 ] 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 "subsystem": "vmd", 00:19:58.501 "config": [] 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 "subsystem": "accel", 00:19:58.501 "config": [ 00:19:58.501 { 00:19:58.501 "method": "accel_set_options", 00:19:58.501 "params": { 00:19:58.501 "small_cache_size": 128, 00:19:58.501 "large_cache_size": 16, 00:19:58.501 "task_count": 2048, 00:19:58.501 "sequence_count": 2048, 00:19:58.501 "buf_count": 2048 00:19:58.501 } 00:19:58.501 } 00:19:58.501 ] 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 "subsystem": "bdev", 00:19:58.501 "config": [ 00:19:58.501 { 00:19:58.501 "method": "bdev_set_options", 00:19:58.501 "params": { 00:19:58.501 "bdev_io_pool_size": 65535, 00:19:58.501 "bdev_io_cache_size": 256, 00:19:58.501 "bdev_auto_examine": true, 
00:19:58.501 "iobuf_small_cache_size": 128, 00:19:58.501 "iobuf_large_cache_size": 16 00:19:58.501 } 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 "method": "bdev_raid_set_options", 00:19:58.501 "params": { 00:19:58.501 "process_window_size_kb": 1024, 00:19:58.501 "process_max_bandwidth_mb_sec": 0 00:19:58.501 } 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 "method": "bdev_iscsi_set_options", 00:19:58.501 "params": { 00:19:58.501 "timeout_sec": 30 00:19:58.501 } 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 "method": "bdev_nvme_set_options", 00:19:58.501 "params": { 00:19:58.501 "action_on_timeout": "none", 00:19:58.501 "timeout_us": 0, 00:19:58.501 "timeout_admin_us": 0, 00:19:58.501 "keep_alive_timeout_ms": 10000, 00:19:58.501 "arbitration_burst": 0, 00:19:58.501 "low_priority_weight": 0, 00:19:58.501 "medium_priority_weight": 0, 00:19:58.501 "high_priority_weight": 0, 00:19:58.501 "nvme_adminq_poll_period_us": 10000, 00:19:58.501 "nvme_ioq_poll_period_us": 0, 00:19:58.501 "io_queue_requests": 0, 00:19:58.501 "delay_cmd_submit": true, 00:19:58.501 "transport_retry_count": 4, 00:19:58.501 "bdev_retry_count": 3, 00:19:58.501 "transport_ack_timeout": 0, 00:19:58.501 "ctrlr_loss_timeout_sec": 0, 00:19:58.501 "reconnect_delay_sec": 0, 00:19:58.501 "fast_io_fail_timeout_sec": 0, 00:19:58.501 "disable_auto_failback": false, 00:19:58.501 "generate_uuids": false, 00:19:58.501 "transport_tos": 0, 00:19:58.501 "nvme_error_stat": false, 00:19:58.501 "rdma_srq_size": 0, 00:19:58.501 "io_path_stat": false, 00:19:58.501 "allow_accel_sequence": false, 00:19:58.501 "rdma_max_cq_size": 0, 00:19:58.501 "rdma_cm_event_timeout_ms": 0, 00:19:58.501 "dhchap_digests": [ 00:19:58.501 "sha256", 00:19:58.501 "sha384", 00:19:58.501 "sha512" 00:19:58.501 ], 00:19:58.501 "dhchap_dhgroups": [ 00:19:58.501 "null", 00:19:58.501 "ffdhe2048", 00:19:58.501 "ffdhe3072", 00:19:58.501 "ffdhe4096", 00:19:58.501 "ffdhe6144", 00:19:58.501 "ffdhe8192" 00:19:58.501 ] 00:19:58.501 } 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 "method": "bdev_nvme_set_hotplug", 00:19:58.501 "params": { 00:19:58.501 "period_us": 100000, 00:19:58.501 "enable": false 00:19:58.501 } 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 "method": "bdev_malloc_create", 00:19:58.501 "params": { 00:19:58.501 "name": "malloc0", 00:19:58.501 "num_blocks": 8192, 00:19:58.501 "block_size": 4096, 00:19:58.501 "physical_block_size": 4096, 00:19:58.501 "uuid": "84f06d58-9293-4fee-9b87-23fff1789453", 00:19:58.501 "optimal_io_boundary": 0, 00:19:58.501 "md_size": 0, 00:19:58.501 "dif_type": 0, 00:19:58.501 "dif_is_head_of_md": false, 00:19:58.501 "dif_pi_format": 0 00:19:58.501 } 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 "method": "bdev_wait_for_examine" 00:19:58.501 } 00:19:58.501 ] 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 "subsystem": "scsi", 00:19:58.501 "config": null 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 "subsystem": "scheduler", 00:19:58.501 "config": [ 00:19:58.501 { 00:19:58.501 "method": "framework_set_scheduler", 00:19:58.501 "params": { 00:19:58.501 "name": "static" 00:19:58.501 } 00:19:58.501 } 00:19:58.501 ] 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 "subsystem": "vhost_scsi", 00:19:58.501 "config": [] 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 "subsystem": "vhost_blk", 00:19:58.501 "config": [] 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 "subsystem": "ublk", 00:19:58.501 "config": [ 00:19:58.501 { 00:19:58.501 "method": "ublk_create_target", 00:19:58.501 "params": { 00:19:58.501 "cpumask": "1" 00:19:58.501 } 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 
"method": "ublk_start_disk", 00:19:58.501 "params": { 00:19:58.501 "bdev_name": "malloc0", 00:19:58.501 "ublk_id": 0, 00:19:58.501 "num_queues": 1, 00:19:58.501 "queue_depth": 128 00:19:58.501 } 00:19:58.501 } 00:19:58.501 ] 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 "subsystem": "nbd", 00:19:58.501 "config": [] 00:19:58.501 }, 00:19:58.501 { 00:19:58.501 "subsystem": "nvmf", 00:19:58.501 "config": [ 00:19:58.501 { 00:19:58.501 "method": "nvmf_set_config", 00:19:58.501 "params": { 00:19:58.501 "discovery_filter": "match_any", 00:19:58.501 "admin_cmd_passthru": { 00:19:58.501 "identify_ctrlr": false 00:19:58.501 }, 00:19:58.501 "dhchap_digests": [ 00:19:58.501 "sha256", 00:19:58.501 "sha384", 00:19:58.501 "sha512" 00:19:58.501 ], 00:19:58.501 "dhchap_dhgroups": [ 00:19:58.501 "null", 00:19:58.501 "ffdhe2048", 00:19:58.501 "ffdhe3072", 00:19:58.501 "ffdhe4096", 00:19:58.501 "ffdhe6144", 00:19:58.501 "ffdhe8192" 00:19:58.501 ] 00:19:58.501 } 00:19:58.501 }, 00:19:58.501 { 00:19:58.502 "method": "nvmf_set_max_subsystems", 00:19:58.502 "params": { 00:19:58.502 "max_subsystems": 1024 00:19:58.502 } 00:19:58.502 }, 00:19:58.502 { 00:19:58.502 "method": "nvmf_set_crdt", 00:19:58.502 "params": { 00:19:58.502 "crdt1": 0, 00:19:58.502 "crdt2": 0, 00:19:58.502 "crdt3": 0 00:19:58.502 } 00:19:58.502 } 00:19:58.502 ] 00:19:58.502 }, 00:19:58.502 { 00:19:58.502 "subsystem": "iscsi", 00:19:58.502 "config": [ 00:19:58.502 { 00:19:58.502 "method": "iscsi_set_options", 00:19:58.502 "params": { 00:19:58.502 "node_base": "iqn.2016-06.io.spdk", 00:19:58.502 "max_sessions": 128, 00:19:58.502 "max_connections_per_session": 2, 00:19:58.502 "max_queue_depth": 64, 00:19:58.502 "default_time2wait": 2, 00:19:58.502 "default_time2retain": 20, 00:19:58.502 "first_burst_length": 8192, 00:19:58.502 "immediate_data": true, 00:19:58.502 "allow_duplicated_isid": false, 00:19:58.502 "error_recovery_level": 0, 00:19:58.502 "nop_timeout": 60, 00:19:58.502 "nop_in_interval": 30, 00:19:58.502 "disable_chap": false, 00:19:58.502 "require_chap": false, 00:19:58.502 "mutual_chap": false, 00:19:58.502 "chap_group": 0, 00:19:58.502 "max_large_datain_per_connection": 64, 00:19:58.502 "max_r2t_per_connection": 4, 00:19:58.502 "pdu_pool_size": 36864, 00:19:58.502 "immediate_data_pool_size": 16384, 00:19:58.502 "data_out_pool_size": 2048 00:19:58.502 } 00:19:58.502 } 00:19:58.502 ] 00:19:58.502 } 00:19:58.502 ] 00:19:58.502 }' 00:19:58.502 18:21:32 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.502 18:21:32 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:58.502 18:21:32 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:58.502 [2024-11-26 18:21:32.836685] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
00:19:58.502 [2024-11-26 18:21:32.837655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75384 ] 00:19:58.759 [2024-11-26 18:21:33.031984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.759 [2024-11-26 18:21:33.168586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.158 [2024-11-26 18:21:34.279605] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:00.158 [2024-11-26 18:21:34.280781] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:00.158 [2024-11-26 18:21:34.287791] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:20:00.158 [2024-11-26 18:21:34.287912] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:20:00.158 [2024-11-26 18:21:34.287930] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:00.158 [2024-11-26 18:21:34.287940] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:00.158 [2024-11-26 18:21:34.296840] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:00.158 [2024-11-26 18:21:34.296893] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:00.158 [2024-11-26 18:21:34.303648] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:00.158 [2024-11-26 18:21:34.303793] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:00.158 [2024-11-26 18:21:34.320661] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:00.158 18:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:00.158 18:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:20:00.158 18:21:34 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:20:00.158 18:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.158 18:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:00.158 18:21:34 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:20:00.158 18:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.158 18:21:34 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:00.158 18:21:34 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:20:00.158 18:21:34 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75384 00:20:00.158 18:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75384 ']' 00:20:00.158 18:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75384 00:20:00.158 18:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:20:00.158 18:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:00.158 18:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75384 00:20:00.158 killing process with pid 75384 00:20:00.158 18:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:00.158 
18:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:00.158 18:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75384' 00:20:00.158 18:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75384 00:20:00.158 18:21:34 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75384 00:20:01.533 [2024-11-26 18:21:35.915651] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:01.533 [2024-11-26 18:21:35.948781] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:01.533 [2024-11-26 18:21:35.949000] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:01.533 [2024-11-26 18:21:35.956770] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:01.533 [2024-11-26 18:21:35.956854] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:01.533 [2024-11-26 18:21:35.956877] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:01.533 [2024-11-26 18:21:35.956924] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:01.533 [2024-11-26 18:21:35.957167] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:04.065 18:21:38 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:20:04.065 ************************************ 00:20:04.065 END TEST test_save_ublk_config 00:20:04.065 ************************************ 00:20:04.065 00:20:04.065 real 0m10.472s 00:20:04.065 user 0m7.793s 00:20:04.065 sys 0m3.640s 00:20:04.065 18:21:38 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:04.065 18:21:38 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:04.065 18:21:38 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75479 00:20:04.065 18:21:38 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:04.065 18:21:38 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:04.065 18:21:38 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75479 00:20:04.065 18:21:38 ublk -- common/autotest_common.sh@835 -- # '[' -z 75479 ']' 00:20:04.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.065 18:21:38 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.065 18:21:38 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.065 18:21:38 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.065 18:21:38 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.065 18:21:38 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:04.065 [2024-11-26 18:21:38.334117] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
00:20:04.065 [2024-11-26 18:21:38.334791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75479 ] 00:20:04.065 [2024-11-26 18:21:38.518445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:04.324 [2024-11-26 18:21:38.663252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.324 [2024-11-26 18:21:38.663258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.265 18:21:39 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:05.265 18:21:39 ublk -- common/autotest_common.sh@868 -- # return 0 00:20:05.265 18:21:39 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:20:05.265 18:21:39 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:05.265 18:21:39 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:05.265 18:21:39 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:05.265 ************************************ 00:20:05.265 START TEST test_create_ublk 00:20:05.265 ************************************ 00:20:05.265 18:21:39 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:20:05.265 18:21:39 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:20:05.265 18:21:39 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.265 18:21:39 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:05.266 [2024-11-26 18:21:39.622601] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:05.266 [2024-11-26 18:21:39.629247] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:05.266 18:21:39 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.266 18:21:39 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:20:05.266 18:21:39 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:20:05.266 18:21:39 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.266 18:21:39 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:05.524 18:21:39 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.524 18:21:39 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:20:05.524 18:21:39 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:20:05.524 18:21:39 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.524 18:21:39 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:05.524 [2024-11-26 18:21:39.939848] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:20:05.524 [2024-11-26 18:21:39.940365] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:20:05.524 [2024-11-26 18:21:39.940400] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:05.524 [2024-11-26 18:21:39.940410] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:05.524 [2024-11-26 18:21:39.947809] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:05.524 [2024-11-26 18:21:39.947836] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:05.524 
[2024-11-26 18:21:39.954598] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:05.524 [2024-11-26 18:21:39.955403] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:05.782 [2024-11-26 18:21:39.983610] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:05.782 18:21:39 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.782 18:21:39 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:20:05.782 18:21:39 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:20:05.782 18:21:39 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:20:05.782 18:21:39 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.782 18:21:39 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:05.782 18:21:40 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.782 18:21:40 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:20:05.782 { 00:20:05.782 "ublk_device": "/dev/ublkb0", 00:20:05.782 "id": 0, 00:20:05.782 "queue_depth": 512, 00:20:05.782 "num_queues": 4, 00:20:05.782 "bdev_name": "Malloc0" 00:20:05.782 } 00:20:05.782 ]' 00:20:05.782 18:21:40 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:20:05.782 18:21:40 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:05.782 18:21:40 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:20:05.782 18:21:40 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:20:05.782 18:21:40 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:20:05.782 18:21:40 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:20:05.782 18:21:40 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:20:05.782 18:21:40 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:20:05.782 18:21:40 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:20:06.041 18:21:40 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:20:06.041 18:21:40 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:20:06.041 18:21:40 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:20:06.041 18:21:40 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:20:06.041 18:21:40 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:20:06.041 18:21:40 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:20:06.041 18:21:40 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:20:06.041 18:21:40 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:20:06.041 18:21:40 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:20:06.041 18:21:40 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:20:06.041 18:21:40 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:20:06.041 18:21:40 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
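The /dev/ublkb0 device that the fio command assembled above will exercise was created by the JSON-RPC sequence traced earlier in this test. A minimal standalone sketch of that sequence, with parameters copied from the trace (it assumes spdk_tgt is already running and the ublk_drv kernel module is loaded, as done earlier in this job):

    # Sketch of the traced setup, issued by hand via scripts/rpc.py.
    SPDK=/home/vagrant/spdk_repo/spdk
    sudo "$SPDK/scripts/rpc.py" ublk_create_target
    # 128 MiB malloc bdev with 4096-byte blocks; prints the bdev name (Malloc0)
    sudo "$SPDK/scripts/rpc.py" bdev_malloc_create 128 4096
    # expose Malloc0 as ublk id 0 with 4 queues of depth 512 -> /dev/ublkb0
    sudo "$SPDK/scripts/rpc.py" ublk_start_disk Malloc0 0 -q 4 -d 512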
00:20:06.041 18:21:40 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:20:06.041 fio: verification read phase will never start because write phase uses all of runtime 00:20:06.041 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:20:06.041 fio-3.35 00:20:06.041 Starting 1 process 00:20:18.243 00:20:18.243 fio_test: (groupid=0, jobs=1): err= 0: pid=75531: Tue Nov 26 18:21:50 2024 00:20:18.244 write: IOPS=10.0k, BW=39.1MiB/s (41.0MB/s)(391MiB/10001msec); 0 zone resets 00:20:18.244 clat (usec): min=59, max=11700, avg=98.53, stdev=169.02 00:20:18.244 lat (usec): min=60, max=11724, avg=99.29, stdev=169.05 00:20:18.244 clat percentiles (usec): 00:20:18.244 | 1.00th=[ 76], 5.00th=[ 78], 10.00th=[ 80], 20.00th=[ 81], 00:20:18.244 | 30.00th=[ 82], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 87], 00:20:18.244 | 70.00th=[ 91], 80.00th=[ 95], 90.00th=[ 104], 95.00th=[ 114], 00:20:18.244 | 99.00th=[ 139], 99.50th=[ 182], 99.90th=[ 3228], 99.95th=[ 3621], 00:20:18.244 | 99.99th=[ 4080] 00:20:18.244 bw ( KiB/s): min=18176, max=43184, per=99.44%, avg=39825.68, stdev=5402.29, samples=19 00:20:18.244 iops : min= 4544, max=10796, avg=9956.42, stdev=1350.57, samples=19 00:20:18.244 lat (usec) : 100=87.31%, 250=12.22%, 500=0.02%, 750=0.02%, 1000=0.03% 00:20:18.244 lat (msec) : 2=0.13%, 4=0.25%, 10=0.01%, 20=0.01% 00:20:18.244 cpu : usr=2.62%, sys=6.67%, ctx=100139, majf=0, minf=797 00:20:18.244 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:18.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.244 issued rwts: total=0,100137,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:18.244 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:18.244 00:20:18.244 Run status group 0 (all jobs): 00:20:18.244 WRITE: bw=39.1MiB/s (41.0MB/s), 39.1MiB/s-39.1MiB/s (41.0MB/s-41.0MB/s), io=391MiB (410MB), run=10001-10001msec 00:20:18.244 00:20:18.244 Disk stats (read/write): 00:20:18.244 ublkb0: ios=0/98960, merge=0/0, ticks=0/9045, in_queue=9046, util=99.08% 00:20:18.244 18:21:50 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:20:18.244 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.244 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:18.244 [2024-11-26 18:21:50.540468] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:18.244 [2024-11-26 18:21:50.595674] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:18.244 [2024-11-26 18:21:50.596626] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:18.244 [2024-11-26 18:21:50.603623] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:18.244 [2024-11-26 18:21:50.603932] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:18.244 [2024-11-26 18:21:50.603958] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:18.244 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.244 18:21:50 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:20:18.244 
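As a sanity check on the write job above, the reported figures are self-consistent; a quick back-of-the-envelope using the 4 KiB block size from the job line:

    100,137 writes x 4096 B = 410 MB = 391 MiB       -> matches io=391MiB (410MB)
    100,137 / 10.001 s      = 10,013 IOPS            -> matches IOPS=10.0k
    10k IOPS x 4 KiB        = 41.0 MB/s = 39.1 MiB/s -> matches BW=39.1MiB/s

The NOT wrapper invoked above is the suite's negative check: with disk 0 already stopped, a second ublk_stop_disk 0 must fail, and the expected -19 (No such device) JSON-RPC error appears below.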
18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:20:18.244 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:20:18.244 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:18.244 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:18.244 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:18.244 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:18.244 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:20:18.244 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.244 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:18.244 [2024-11-26 18:21:50.619710] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:20:18.244 request: 00:20:18.244 { 00:20:18.244 "ublk_id": 0, 00:20:18.244 "method": "ublk_stop_disk", 00:20:18.244 "req_id": 1 00:20:18.244 } 00:20:18.244 Got JSON-RPC error response 00:20:18.244 response: 00:20:18.244 { 00:20:18.244 "code": -19, 00:20:18.244 "message": "No such device" 00:20:18.244 } 00:20:18.244 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:18.244 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:20:18.244 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:18.244 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:18.244 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:18.244 18:21:50 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:20:18.244 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.244 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:18.244 [2024-11-26 18:21:50.635719] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:18.244 [2024-11-26 18:21:50.643634] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:18.244 [2024-11-26 18:21:50.643685] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:18.244 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.244 18:21:50 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:18.244 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.244 18:21:50 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:18.244 18:21:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.244 18:21:51 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:20:18.244 18:21:51 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:20:18.244 18:21:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.244 18:21:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:18.244 18:21:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.244 18:21:51 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:20:18.244 18:21:51 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:20:18.244 18:21:51 ublk.test_create_ublk -- lvol/common.sh@26 -- # 
'[' 0 == 0 ']' 00:20:18.244 18:21:51 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:20:18.244 18:21:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.244 18:21:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:18.244 18:21:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.244 18:21:51 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:20:18.244 18:21:51 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:20:18.244 ************************************ 00:20:18.244 END TEST test_create_ublk 00:20:18.244 ************************************ 00:20:18.244 18:21:51 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:20:18.244 00:20:18.244 real 0m11.831s 00:20:18.244 user 0m0.737s 00:20:18.244 sys 0m0.773s 00:20:18.244 18:21:51 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:18.244 18:21:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:18.244 18:21:51 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:20:18.244 18:21:51 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:18.244 18:21:51 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:18.244 18:21:51 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:18.244 ************************************ 00:20:18.244 START TEST test_create_multi_ublk 00:20:18.244 ************************************ 00:20:18.244 18:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:20:18.244 18:21:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:20:18.244 18:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.244 18:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:18.244 [2024-11-26 18:21:51.507637] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:18.244 [2024-11-26 18:21:51.510625] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:18.244 18:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.244 18:21:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:20:18.244 18:21:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:20:18.244 18:21:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:18.244 18:21:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:20:18.244 18:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.244 18:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:18.244 18:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.244 18:21:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:20:18.244 18:21:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:20:18.244 18:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.244 18:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:18.244 [2024-11-26 18:21:51.823802] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:20:18.244 [2024-11-26 
18:21:51.824399] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:20:18.244 [2024-11-26 18:21:51.824415] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:18.244 [2024-11-26 18:21:51.824430] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:18.244 [2024-11-26 18:21:51.830615] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:18.244 [2024-11-26 18:21:51.830651] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:18.244 [2024-11-26 18:21:51.839592] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:18.244 [2024-11-26 18:21:51.840445] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:18.244 [2024-11-26 18:21:51.863586] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:18.244 18:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.244 18:21:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:20:18.244 18:21:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:18.244 18:21:51 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:20:18.244 18:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.244 18:21:51 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:18.244 18:21:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.244 18:21:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:20:18.244 18:21:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:20:18.245 18:21:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.245 18:21:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:18.245 [2024-11-26 18:21:52.159859] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:20:18.245 [2024-11-26 18:21:52.160373] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:20:18.245 [2024-11-26 18:21:52.160391] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:18.245 [2024-11-26 18:21:52.160399] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:20:18.245 [2024-11-26 18:21:52.169157] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:18.245 [2024-11-26 18:21:52.169320] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:18.245 [2024-11-26 18:21:52.175630] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:18.245 [2024-11-26 18:21:52.176407] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:20:18.245 [2024-11-26 18:21:52.192637] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:20:18.245 18:21:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.245 18:21:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:20:18.245 18:21:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:18.245 18:21:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 
-- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:20:18.245 18:21:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.245 18:21:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:18.245 18:21:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.245 18:21:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:20:18.245 18:21:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:20:18.245 18:21:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.245 18:21:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:18.245 [2024-11-26 18:21:52.500814] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:20:18.245 [2024-11-26 18:21:52.501415] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:20:18.245 [2024-11-26 18:21:52.501436] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:20:18.245 [2024-11-26 18:21:52.501448] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:20:18.245 [2024-11-26 18:21:52.508593] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:18.245 [2024-11-26 18:21:52.508624] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:18.245 [2024-11-26 18:21:52.515626] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:18.245 [2024-11-26 18:21:52.516444] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:20:18.245 [2024-11-26 18:21:52.532638] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:20:18.245 18:21:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.245 18:21:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:20:18.245 18:21:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:18.245 18:21:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:20:18.245 18:21:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.245 18:21:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:18.503 18:21:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.503 18:21:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:20:18.503 18:21:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:20:18.504 18:21:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.504 18:21:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:18.504 [2024-11-26 18:21:52.827839] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:20:18.504 [2024-11-26 18:21:52.828432] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:20:18.504 [2024-11-26 18:21:52.828451] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:20:18.504 [2024-11-26 18:21:52.828460] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:20:18.504 [2024-11-26 18:21:52.835622] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:18.504 [2024-11-26 18:21:52.835647] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:18.504 [2024-11-26 18:21:52.843655] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:18.504 [2024-11-26 18:21:52.844469] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:20:18.504 [2024-11-26 18:21:52.852682] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:20:18.504 18:21:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.504 18:21:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:20:18.504 18:21:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:20:18.504 18:21:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.504 18:21:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:18.504 18:21:52 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.504 18:21:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:20:18.504 { 00:20:18.504 "ublk_device": "/dev/ublkb0", 00:20:18.504 "id": 0, 00:20:18.504 "queue_depth": 512, 00:20:18.504 "num_queues": 4, 00:20:18.504 "bdev_name": "Malloc0" 00:20:18.504 }, 00:20:18.504 { 00:20:18.504 "ublk_device": "/dev/ublkb1", 00:20:18.504 "id": 1, 00:20:18.504 "queue_depth": 512, 00:20:18.504 "num_queues": 4, 00:20:18.504 "bdev_name": "Malloc1" 00:20:18.504 }, 00:20:18.504 { 00:20:18.504 "ublk_device": "/dev/ublkb2", 00:20:18.504 "id": 2, 00:20:18.504 "queue_depth": 512, 00:20:18.504 "num_queues": 4, 00:20:18.504 "bdev_name": "Malloc2" 00:20:18.504 }, 00:20:18.504 { 00:20:18.504 "ublk_device": "/dev/ublkb3", 00:20:18.504 "id": 3, 00:20:18.504 "queue_depth": 512, 00:20:18.504 "num_queues": 4, 00:20:18.504 "bdev_name": "Malloc3" 00:20:18.504 } 00:20:18.504 ]' 00:20:18.504 18:21:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:20:18.504 18:21:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:18.504 18:21:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:20:18.504 18:21:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:18.504 18:21:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:20:18.762 18:21:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:20:18.762 18:21:52 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:20:18.762 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:18.762 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:20:18.762 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:18.762 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:20:18.762 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:20:18.762 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:18.762 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:20:18.762 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:20:18.762 18:21:53 
ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:20:19.021 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:20:19.021 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:20:19.021 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:19.021 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:20:19.021 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:19.021 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:20:19.021 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:20:19.021 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:19.021 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:20:19.021 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:20:19.021 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:20:19.279 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:20:19.279 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:20:19.279 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:19.279 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:20:19.279 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:19.279 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:20:19.279 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:20:19.279 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:19.279 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:20:19.538 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:20:19.538 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:20:19.538 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:20:19.538 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:20:19.538 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:19.538 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:20:19.538 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:19.538 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:20:19.538 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:20:19.538 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:20:19.538 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:20:19.538 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:19.538 18:21:53 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:20:19.538 18:21:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.538 18:21:53 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:19.538 [2024-11-26 18:21:53.977209] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl 
cmd UBLK_CMD_STOP_DEV 00:20:19.797 [2024-11-26 18:21:54.027336] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:19.797 [2024-11-26 18:21:54.029177] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:19.797 [2024-11-26 18:21:54.036688] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:19.797 [2024-11-26 18:21:54.037088] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:19.797 [2024-11-26 18:21:54.037114] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:19.797 18:21:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.797 18:21:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:19.797 18:21:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:20:19.797 18:21:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.797 18:21:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:19.797 [2024-11-26 18:21:54.048799] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:20:19.797 [2024-11-26 18:21:54.097752] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:19.797 [2024-11-26 18:21:54.099355] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:20:19.797 [2024-11-26 18:21:54.099872] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:19.797 [2024-11-26 18:21:54.100205] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:20:19.797 [2024-11-26 18:21:54.100232] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:20:19.797 18:21:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.797 18:21:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:19.797 18:21:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:20:19.797 18:21:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.797 18:21:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:19.797 [2024-11-26 18:21:54.109879] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:20:19.797 [2024-11-26 18:21:54.145387] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:19.797 [2024-11-26 18:21:54.146926] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:20:19.797 [2024-11-26 18:21:54.153738] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:19.797 [2024-11-26 18:21:54.154127] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:20:19.797 [2024-11-26 18:21:54.154152] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:20:19.797 18:21:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.797 18:21:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:19.797 18:21:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:20:19.797 18:21:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.797 18:21:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:19.797 [2024-11-26 
18:21:54.168853] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:20:19.797 [2024-11-26 18:21:54.213795] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:19.797 [2024-11-26 18:21:54.215120] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:20:19.797 [2024-11-26 18:21:54.224769] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:19.797 [2024-11-26 18:21:54.225147] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:20:19.797 [2024-11-26 18:21:54.225171] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:20:19.797 18:21:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.797 18:21:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:20:20.363 [2024-11-26 18:21:54.527828] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:20.363 [2024-11-26 18:21:54.535676] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:20.363 [2024-11-26 18:21:54.535741] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:20.363 18:21:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:20:20.363 18:21:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:20.363 18:21:54 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:20.363 18:21:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.363 18:21:54 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:20.928 18:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.928 18:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:20.928 18:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:20.928 18:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.928 18:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:21.187 18:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.187 18:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:21.187 18:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:20:21.187 18:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.187 18:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:21.796 18:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.796 18:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:21.796 18:21:55 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:20:21.796 18:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.796 18:21:55 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:22.055 18:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.055 18:21:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:20:22.055 18:21:56 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 
00:20:22.055 18:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.055 18:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:22.055 18:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.055 18:21:56 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:20:22.055 18:21:56 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:20:22.055 18:21:56 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:20:22.055 18:21:56 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:20:22.055 18:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.055 18:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:22.055 18:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.055 18:21:56 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:20:22.055 18:21:56 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:20:22.055 ************************************ 00:20:22.055 END TEST test_create_multi_ublk 00:20:22.055 ************************************ 00:20:22.055 18:21:56 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:20:22.055 00:20:22.055 real 0m4.990s 00:20:22.055 user 0m1.393s 00:20:22.055 sys 0m0.186s 00:20:22.055 18:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:22.055 18:21:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:22.313 18:21:56 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:22.313 18:21:56 ublk -- ublk/ublk.sh@147 -- # cleanup 00:20:22.313 18:21:56 ublk -- ublk/ublk.sh@130 -- # killprocess 75479 00:20:22.313 18:21:56 ublk -- common/autotest_common.sh@954 -- # '[' -z 75479 ']' 00:20:22.313 18:21:56 ublk -- common/autotest_common.sh@958 -- # kill -0 75479 00:20:22.313 18:21:56 ublk -- common/autotest_common.sh@959 -- # uname 00:20:22.313 18:21:56 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:22.313 18:21:56 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75479 00:20:22.313 killing process with pid 75479 00:20:22.313 18:21:56 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:22.313 18:21:56 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:22.313 18:21:56 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75479' 00:20:22.313 18:21:56 ublk -- common/autotest_common.sh@973 -- # kill 75479 00:20:22.313 18:21:56 ublk -- common/autotest_common.sh@978 -- # wait 75479 00:20:23.248 [2024-11-26 18:21:57.681374] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:23.248 [2024-11-26 18:21:57.681455] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:24.624 00:20:24.624 real 0m31.471s 00:20:24.624 user 0m45.053s 00:20:24.624 sys 0m10.756s 00:20:24.624 18:21:58 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:24.624 ************************************ 00:20:24.624 18:21:58 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:24.624 END TEST ublk 00:20:24.624 ************************************ 00:20:24.624 18:21:58 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:20:24.624 18:21:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:24.624 
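Everything test_create_multi_ublk verified above condenses to one loop over the same single-disk RPCs; a hedged sketch using the test's own names and sizes:

    # Bring up four ublk disks, each backed by a 128 MiB malloc bdev
    for i in 0 1 2 3; do
        scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096
        scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512
    done
    scripts/rpc.py ublk_get_disks        # expect /dev/ublkb0 .. /dev/ublkb3
    # Tear down each disk, then remove the target itself
    for i in 0 1 2 3; do
        scripts/rpc.py ublk_stop_disk $i
    done
    scripts/rpc.py -t 120 ublk_destroy_target

The recovery suite dispatched above reuses the same primitives, adding a SIGKILL of the target and a ublk_recover_disk call in the middle.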
18:21:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:24.624 18:21:58 -- common/autotest_common.sh@10 -- # set +x 00:20:24.624 ************************************ 00:20:24.624 START TEST ublk_recovery 00:20:24.624 ************************************ 00:20:24.624 18:21:59 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:20:24.624 * Looking for test storage... 00:20:24.883 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:20:24.883 18:21:59 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:24.883 18:21:59 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:20:24.883 18:21:59 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:24.883 18:21:59 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:24.883 18:21:59 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:20:24.883 18:21:59 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:24.883 18:21:59 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:24.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.883 --rc genhtml_branch_coverage=1 00:20:24.883 --rc genhtml_function_coverage=1 00:20:24.883 --rc genhtml_legend=1 00:20:24.883 --rc geninfo_all_blocks=1 00:20:24.883 --rc geninfo_unexecuted_blocks=1 00:20:24.883 00:20:24.883 ' 00:20:24.883 18:21:59 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:24.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.883 --rc genhtml_branch_coverage=1 00:20:24.883 --rc genhtml_function_coverage=1 00:20:24.883 --rc genhtml_legend=1 00:20:24.883 --rc geninfo_all_blocks=1 00:20:24.883 --rc geninfo_unexecuted_blocks=1 00:20:24.883 00:20:24.883 ' 00:20:24.883 18:21:59 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:24.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.883 --rc genhtml_branch_coverage=1 00:20:24.883 --rc genhtml_function_coverage=1 00:20:24.883 --rc genhtml_legend=1 00:20:24.883 --rc geninfo_all_blocks=1 00:20:24.883 --rc geninfo_unexecuted_blocks=1 00:20:24.883 00:20:24.883 ' 00:20:24.883 18:21:59 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:24.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:24.883 --rc genhtml_branch_coverage=1 00:20:24.883 --rc genhtml_function_coverage=1 00:20:24.883 --rc genhtml_legend=1 00:20:24.883 --rc geninfo_all_blocks=1 00:20:24.883 --rc geninfo_unexecuted_blocks=1 00:20:24.883 00:20:24.883 ' 00:20:24.883 18:21:59 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:20:24.883 18:21:59 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:20:24.883 18:21:59 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:20:24.883 18:21:59 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:20:24.883 18:21:59 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:20:24.883 18:21:59 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:20:24.883 18:21:59 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:20:24.883 18:21:59 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:20:24.883 18:21:59 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:20:24.883 18:21:59 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:20:24.883 18:21:59 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75904 00:20:24.883 18:21:59 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:24.883 18:21:59 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:24.883 18:21:59 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75904 00:20:24.883 18:21:59 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75904 ']' 00:20:24.883 18:21:59 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.883 18:21:59 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:24.883 18:21:59 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.883 18:21:59 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:24.883 18:21:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:24.883 [2024-11-26 18:21:59.328867] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:20:24.884 [2024-11-26 18:21:59.329272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75904 ] 00:20:25.143 [2024-11-26 18:21:59.516885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:25.401 [2024-11-26 18:21:59.648324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.401 [2024-11-26 18:21:59.648324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.339 18:22:00 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.339 18:22:00 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:20:26.339 18:22:00 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:20:26.339 18:22:00 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.339 18:22:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.339 [2024-11-26 18:22:00.576630] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:26.339 [2024-11-26 18:22:00.579633] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:26.339 18:22:00 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.339 18:22:00 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:20:26.339 18:22:00 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.339 18:22:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.339 malloc0 00:20:26.339 18:22:00 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.339 18:22:00 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:20:26.339 18:22:00 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.339 18:22:00 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:26.339 [2024-11-26 18:22:00.737805] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:20:26.339 [2024-11-26 18:22:00.738000] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:20:26.339 [2024-11-26 18:22:00.738021] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:26.339 [2024-11-26 18:22:00.738035] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:20:26.339 [2024-11-26 18:22:00.745656] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:26.339 [2024-11-26 18:22:00.745684] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:26.339 [2024-11-26 18:22:00.753639] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:26.339 [2024-11-26 18:22:00.753852] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:20:26.339 [2024-11-26 18:22:00.776626] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:20:26.339 1 00:20:26.339 18:22:00 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.339 18:22:00 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:20:27.716 18:22:01 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75943 00:20:27.716 18:22:01 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:20:27.716 18:22:01 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:20:27.716 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:27.716 fio-3.35 00:20:27.716 Starting 1 process 00:20:32.981 18:22:06 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75904 00:20:32.981 18:22:06 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:20:38.249 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75904 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:20:38.249 18:22:11 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76051 00:20:38.249 18:22:11 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:38.249 18:22:11 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:38.249 18:22:11 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76051 00:20:38.249 18:22:11 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76051 ']' 00:20:38.249 18:22:11 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.249 18:22:11 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.249 18:22:11 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.249 18:22:11 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.249 18:22:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.250 [2024-11-26 18:22:11.931737] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
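The sequence above is the crash half of the recovery scenario: fio was left running against /dev/ublkb1 while spdk_tgt (pid 75904) was SIGKILLed, and a replacement target (pid 76051) is now booting. The trace below reattaches the orphaned kernel device in place; as a sketch, the recovery path on the new target amounts to the following, with names as used by the test:

    # On the replacement spdk_tgt: recreate the target and the backing bdev
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    # Reattach the surviving /dev/ublkb1; queue count and depth are read
    # back from the kernel (UBLK_CMD_GET_DEV_INFO) rather than re-specified
    scripts/rpc.py ublk_recover_disk malloc0 1

fio keeps running throughout, and its 60 s summary further below reports err=0 with ~99.9% device utilization across the kill/recover window.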
00:20:38.250 [2024-11-26 18:22:11.931931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76051 ] 00:20:38.250 [2024-11-26 18:22:12.119848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:38.250 [2024-11-26 18:22:12.271783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.250 [2024-11-26 18:22:12.271804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.816 18:22:13 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.816 18:22:13 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:20:38.816 18:22:13 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:20:38.816 18:22:13 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.816 18:22:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:38.816 [2024-11-26 18:22:13.181598] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:38.816 [2024-11-26 18:22:13.184735] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:38.816 18:22:13 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.816 18:22:13 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:20:38.816 18:22:13 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.816 18:22:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:39.073 malloc0 00:20:39.073 18:22:13 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.073 18:22:13 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:20:39.073 18:22:13 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.073 18:22:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:39.073 [2024-11-26 18:22:13.341788] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:20:39.073 [2024-11-26 18:22:13.341844] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:39.073 [2024-11-26 18:22:13.341869] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:20:39.073 [2024-11-26 18:22:13.349635] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:20:39.073 [2024-11-26 18:22:13.349664] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:20:39.073 [2024-11-26 18:22:13.349678] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:20:39.073 [2024-11-26 18:22:13.349785] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:20:39.073 1 00:20:39.073 18:22:13 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.073 18:22:13 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75943 00:20:39.073 [2024-11-26 18:22:13.357600] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:20:39.073 [2024-11-26 18:22:13.365422] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:20:39.073 [2024-11-26 18:22:13.372878] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:20:39.073 [2024-11-26 
18:22:13.372914] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:21:35.338 00:21:35.338 fio_test: (groupid=0, jobs=1): err= 0: pid=75947: Tue Nov 26 18:23:02 2024 00:21:35.338 read: IOPS=17.5k, BW=68.2MiB/s (71.5MB/s)(4093MiB/60002msec) 00:21:35.338 slat (nsec): min=1846, max=377548, avg=6662.83, stdev=3279.77 00:21:35.338 clat (usec): min=1032, max=6596.1k, avg=3628.14, stdev=53095.25 00:21:35.338 lat (usec): min=1041, max=6596.1k, avg=3634.80, stdev=53095.26 00:21:35.338 clat percentiles (usec): 00:21:35.338 | 1.00th=[ 2573], 5.00th=[ 2802], 10.00th=[ 2868], 20.00th=[ 2933], 00:21:35.338 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3064], 60.00th=[ 3097], 00:21:35.338 | 70.00th=[ 3163], 80.00th=[ 3261], 90.00th=[ 3621], 95.00th=[ 4146], 00:21:35.338 | 99.00th=[ 5932], 99.50th=[ 6718], 99.90th=[ 7504], 99.95th=[ 8225], 00:21:35.338 | 99.99th=[13566] 00:21:35.338 bw ( KiB/s): min=11048, max=86272, per=100.00%, avg=77661.11, stdev=9853.55, samples=107 00:21:35.338 iops : min= 2762, max=21568, avg=19415.27, stdev=2463.39, samples=107 00:21:35.338 write: IOPS=17.4k, BW=68.1MiB/s (71.4MB/s)(4088MiB/60002msec); 0 zone resets 00:21:35.338 slat (nsec): min=1869, max=358430, avg=6819.46, stdev=3350.85 00:21:35.338 clat (usec): min=899, max=6596.3k, avg=3690.16, stdev=49905.30 00:21:35.338 lat (usec): min=904, max=6596.3k, avg=3696.98, stdev=49905.30 00:21:35.338 clat percentiles (usec): 00:21:35.338 | 1.00th=[ 2671], 5.00th=[ 2933], 10.00th=[ 2999], 20.00th=[ 3032], 00:21:35.338 | 30.00th=[ 3097], 40.00th=[ 3130], 50.00th=[ 3195], 60.00th=[ 3228], 00:21:35.338 | 70.00th=[ 3294], 80.00th=[ 3392], 90.00th=[ 3720], 95.00th=[ 4113], 00:21:35.338 | 99.00th=[ 5866], 99.50th=[ 6783], 99.90th=[ 7570], 99.95th=[ 8291], 00:21:35.338 | 99.99th=[13566] 00:21:35.338 bw ( KiB/s): min=10480, max=85816, per=100.00%, avg=77557.81, stdev=9861.11, samples=107 00:21:35.338 iops : min= 2620, max=21454, avg=19389.44, stdev=2465.28, samples=107 00:21:35.338 lat (usec) : 1000=0.01% 00:21:35.338 lat (msec) : 2=0.04%, 4=94.26%, 10=5.67%, 20=0.02%, >=2000=0.01% 00:21:35.338 cpu : usr=9.82%, sys=21.65%, ctx=66977, majf=0, minf=14 00:21:35.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:21:35.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:35.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:35.338 issued rwts: total=1047934,1046641,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:35.338 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:35.338 00:21:35.338 Run status group 0 (all jobs): 00:21:35.338 READ: bw=68.2MiB/s (71.5MB/s), 68.2MiB/s-68.2MiB/s (71.5MB/s-71.5MB/s), io=4093MiB (4292MB), run=60002-60002msec 00:21:35.338 WRITE: bw=68.1MiB/s (71.4MB/s), 68.1MiB/s-68.1MiB/s (71.4MB/s-71.4MB/s), io=4088MiB (4287MB), run=60002-60002msec 00:21:35.338 00:21:35.338 Disk stats (read/write): 00:21:35.338 ublkb1: ios=1045529/1044297, merge=0/0, ticks=3702137/3640878, in_queue=7343015, util=99.93% 00:21:35.338 18:23:02 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:21:35.338 18:23:02 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.338 18:23:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.338 [2024-11-26 18:23:02.055894] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:21:35.338 [2024-11-26 18:23:02.092702] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:21:35.338 [2024-11-26 18:23:02.092902] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:21:35.338 [2024-11-26 18:23:02.102599] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:35.338 [2024-11-26 18:23:02.102743] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:21:35.338 [2024-11-26 18:23:02.102768] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:21:35.338 18:23:02 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.338 18:23:02 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:21:35.338 18:23:02 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.338 18:23:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.338 [2024-11-26 18:23:02.118693] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:35.338 [2024-11-26 18:23:02.126577] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:35.338 [2024-11-26 18:23:02.126624] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:21:35.338 18:23:02 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.338 18:23:02 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:21:35.338 18:23:02 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:21:35.338 18:23:02 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76051 00:21:35.338 18:23:02 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76051 ']' 00:21:35.338 18:23:02 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76051 00:21:35.338 18:23:02 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:21:35.338 18:23:02 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:35.338 18:23:02 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76051 00:21:35.338 18:23:02 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:35.338 18:23:02 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:35.338 killing process with pid 76051 00:21:35.338 18:23:02 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76051' 00:21:35.338 18:23:02 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76051 00:21:35.339 18:23:02 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76051 00:21:35.339 [2024-11-26 18:23:03.700214] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:35.339 [2024-11-26 18:23:03.700291] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:35.339 00:21:35.339 real 1m6.063s 00:21:35.339 user 1m49.063s 00:21:35.339 sys 0m31.008s 00:21:35.339 18:23:05 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:35.339 18:23:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.339 ************************************ 00:21:35.339 END TEST ublk_recovery 00:21:35.339 ************************************ 00:21:35.339 18:23:05 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:21:35.339 18:23:05 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:21:35.339 18:23:05 -- spdk/autotest.sh@260 -- # timing_exit lib 00:21:35.339 18:23:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:35.339 18:23:05 -- common/autotest_common.sh@10 -- # set +x 00:21:35.339 18:23:05 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:21:35.339 18:23:05 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:21:35.339 18:23:05 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 
']' 00:21:35.339 18:23:05 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:35.339 18:23:05 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:35.339 18:23:05 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:35.339 18:23:05 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:35.339 18:23:05 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:35.339 18:23:05 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:35.339 18:23:05 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:21:35.339 18:23:05 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:35.339 18:23:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:35.339 18:23:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:35.339 18:23:05 -- common/autotest_common.sh@10 -- # set +x 00:21:35.339 ************************************ 00:21:35.339 START TEST ftl 00:21:35.339 ************************************ 00:21:35.339 18:23:05 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:35.339 * Looking for test storage... 00:21:35.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:35.339 18:23:05 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:35.339 18:23:05 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:21:35.339 18:23:05 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:35.339 18:23:05 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:35.339 18:23:05 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:35.339 18:23:05 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:35.339 18:23:05 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:35.339 18:23:05 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:21:35.339 18:23:05 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:21:35.339 18:23:05 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:21:35.339 18:23:05 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:21:35.339 18:23:05 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:21:35.339 18:23:05 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:21:35.339 18:23:05 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:21:35.339 18:23:05 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:35.339 18:23:05 ftl -- scripts/common.sh@344 -- # case "$op" in 00:21:35.339 18:23:05 ftl -- scripts/common.sh@345 -- # : 1 00:21:35.339 18:23:05 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:35.339 18:23:05 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:35.339 18:23:05 ftl -- scripts/common.sh@365 -- # decimal 1 00:21:35.339 18:23:05 ftl -- scripts/common.sh@353 -- # local d=1 00:21:35.339 18:23:05 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:35.339 18:23:05 ftl -- scripts/common.sh@355 -- # echo 1 00:21:35.339 18:23:05 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:21:35.339 18:23:05 ftl -- scripts/common.sh@366 -- # decimal 2 00:21:35.339 18:23:05 ftl -- scripts/common.sh@353 -- # local d=2 00:21:35.339 18:23:05 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:35.339 18:23:05 ftl -- scripts/common.sh@355 -- # echo 2 00:21:35.339 18:23:05 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:21:35.339 18:23:05 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:35.339 18:23:05 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:35.339 18:23:05 ftl -- scripts/common.sh@368 -- # return 0 00:21:35.339 18:23:05 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:35.339 18:23:05 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:35.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.339 --rc genhtml_branch_coverage=1 00:21:35.339 --rc genhtml_function_coverage=1 00:21:35.339 --rc genhtml_legend=1 00:21:35.339 --rc geninfo_all_blocks=1 00:21:35.339 --rc geninfo_unexecuted_blocks=1 00:21:35.339 00:21:35.339 ' 00:21:35.339 18:23:05 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:35.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.339 --rc genhtml_branch_coverage=1 00:21:35.339 --rc genhtml_function_coverage=1 00:21:35.339 --rc genhtml_legend=1 00:21:35.339 --rc geninfo_all_blocks=1 00:21:35.339 --rc geninfo_unexecuted_blocks=1 00:21:35.339 00:21:35.339 ' 00:21:35.339 18:23:05 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:35.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.339 --rc genhtml_branch_coverage=1 00:21:35.339 --rc genhtml_function_coverage=1 00:21:35.339 --rc genhtml_legend=1 00:21:35.339 --rc geninfo_all_blocks=1 00:21:35.339 --rc geninfo_unexecuted_blocks=1 00:21:35.339 00:21:35.339 ' 00:21:35.339 18:23:05 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:35.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.339 --rc genhtml_branch_coverage=1 00:21:35.339 --rc genhtml_function_coverage=1 00:21:35.339 --rc genhtml_legend=1 00:21:35.339 --rc geninfo_all_blocks=1 00:21:35.339 --rc geninfo_unexecuted_blocks=1 00:21:35.339 00:21:35.339 ' 00:21:35.339 18:23:05 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:35.339 18:23:05 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:35.339 18:23:05 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:35.339 18:23:05 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:35.339 18:23:05 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
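The xtrace above is scripts/common.sh deciding whether the installed lcov predates 2.0 (`lt 1.15 2` via `cmp_versions`): both version strings are split on `.`, `-`, and `:` and compared component by component, with the shorter one padded with zeros. A condensed sketch of that loop (illustrative, not the exact SPDK helper):

    lt() {   # "less than": returns 0 iff version $1 < version $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS='.-:' read -ra ver2 <<< "$2"   # "2"    -> (2)
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                                    # equal is not less-than
    }
    lt 1.15 2 && echo 'lcov older than 2.x detected'

Here 1 < 2 decides on the first component, which is presumably why the trace then settles on the pre-2.0 `--rc lcov_branch_coverage`/`lcov_function_coverage` option spellings.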
00:21:35.339 18:23:05 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:35.339 18:23:05 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:35.339 18:23:05 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:35.339 18:23:05 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:35.339 18:23:05 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:35.339 18:23:05 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:35.339 18:23:05 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:35.339 18:23:05 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:35.339 18:23:05 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:35.339 18:23:05 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:35.339 18:23:05 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:35.339 18:23:05 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:35.339 18:23:05 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:35.339 18:23:05 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:35.339 18:23:05 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:35.339 18:23:05 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:35.339 18:23:05 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:35.339 18:23:05 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:35.339 18:23:05 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:35.339 18:23:05 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:35.339 18:23:05 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:35.339 18:23:05 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:35.339 18:23:05 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:35.339 18:23:05 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:35.339 18:23:05 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:35.339 18:23:05 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:21:35.339 18:23:05 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:21:35.339 18:23:05 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:21:35.339 18:23:05 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:21:35.339 18:23:05 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:35.339 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:35.339 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:35.339 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:35.339 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:35.339 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:35.339 18:23:05 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76842 00:21:35.339 18:23:05 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:21:35.339 18:23:05 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76842 00:21:35.339 18:23:05 ftl -- common/autotest_common.sh@835 -- # '[' -z 76842 ']' 00:21:35.339 18:23:05 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.339 18:23:05 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.339 18:23:05 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.339 18:23:05 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.339 18:23:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:35.339 [2024-11-26 18:23:05.923128] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:21:35.339 [2024-11-26 18:23:05.923918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76842 ] 00:21:35.339 [2024-11-26 18:23:06.104155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.339 [2024-11-26 18:23:06.236999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.339 18:23:06 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.339 18:23:06 ftl -- common/autotest_common.sh@868 -- # return 0 00:21:35.340 18:23:06 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:21:35.340 18:23:07 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:21:35.340 18:23:08 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:21:35.340 18:23:08 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:35.340 18:23:08 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:21:35.340 18:23:08 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:21:35.340 18:23:08 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:21:35.340 18:23:09 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:21:35.340 18:23:09 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:21:35.340 18:23:09 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:21:35.340 18:23:09 ftl -- ftl/ftl.sh@50 -- # break 00:21:35.340 18:23:09 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:21:35.340 18:23:09 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:21:35.340 18:23:09 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:21:35.340 18:23:09 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:21:35.340 18:23:09 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:21:35.340 18:23:09 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:21:35.340 18:23:09 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:21:35.340 18:23:09 ftl -- ftl/ftl.sh@63 -- # break 00:21:35.340 18:23:09 ftl -- ftl/ftl.sh@66 -- # killprocess 76842 00:21:35.340 18:23:09 ftl -- common/autotest_common.sh@954 -- # '[' -z 76842 ']' 00:21:35.340 18:23:09 ftl -- common/autotest_common.sh@958 -- # kill -0 76842 00:21:35.340 18:23:09 ftl -- common/autotest_common.sh@959 -- # uname 00:21:35.340 18:23:09 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:35.340 18:23:09 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76842 00:21:35.340 18:23:09 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:35.340 18:23:09 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:35.340 killing process with pid 76842 00:21:35.340 18:23:09 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76842' 00:21:35.340 18:23:09 ftl -- common/autotest_common.sh@973 -- # kill 76842 00:21:35.340 18:23:09 ftl -- common/autotest_common.sh@978 -- # wait 76842 00:21:37.871 18:23:11 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:21:37.871 18:23:11 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:21:37.871 18:23:11 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:37.871 18:23:11 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:37.871 18:23:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:37.871 ************************************ 00:21:37.871 START TEST ftl_fio_basic 00:21:37.871 ************************************ 00:21:37.871 18:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:21:37.871 * Looking for test storage... 00:21:37.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:37.871 18:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:37.871 18:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:37.871 18:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:21:37.871 18:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:37.871 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:37.871 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:37.871 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:37.871 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:21:37.871 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:21:37.871 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:21:37.871 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:21:37.871 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:21:37.871 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:21:37.871 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:21:37.871 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:37.871 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:21:37.871 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:21:37.871 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:37.871 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:37.871 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:21:37.871 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:37.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.872 --rc genhtml_branch_coverage=1 00:21:37.872 --rc genhtml_function_coverage=1 00:21:37.872 --rc genhtml_legend=1 00:21:37.872 --rc geninfo_all_blocks=1 00:21:37.872 --rc geninfo_unexecuted_blocks=1 00:21:37.872 00:21:37.872 ' 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:37.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.872 --rc genhtml_branch_coverage=1 00:21:37.872 --rc genhtml_function_coverage=1 00:21:37.872 --rc genhtml_legend=1 00:21:37.872 --rc geninfo_all_blocks=1 00:21:37.872 --rc geninfo_unexecuted_blocks=1 00:21:37.872 00:21:37.872 ' 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:37.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.872 --rc genhtml_branch_coverage=1 00:21:37.872 --rc genhtml_function_coverage=1 00:21:37.872 --rc genhtml_legend=1 00:21:37.872 --rc geninfo_all_blocks=1 00:21:37.872 --rc geninfo_unexecuted_blocks=1 00:21:37.872 00:21:37.872 ' 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:37.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:37.872 --rc genhtml_branch_coverage=1 00:21:37.872 --rc genhtml_function_coverage=1 00:21:37.872 --rc genhtml_legend=1 00:21:37.872 --rc geninfo_all_blocks=1 00:21:37.872 --rc geninfo_unexecuted_blocks=1 00:21:37.872 00:21:37.872 ' 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
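Both shutdowns so far (pid 76051 for ublk_recovery, pid 76842 for the first spdk_tgt) walk the same killprocess sequence from autotest_common.sh: confirm the pid is alive with `kill -0`, check the process name so the `sudo` wrapper is never signalled directly, then `kill` and `wait` to reap it. A minimal sketch of the control flow as traced (the real helper's sudo handling is simplified away here):

    killprocess() {   # condensed from the xtrace; not the full SPDK helper
        local pid=$1
        [ -n "$pid" ] || return 1            # the '[' -z ... ']' guard in the trace
        kill -0 "$pid" || return 0           # nothing to do if already gone
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
            [ "$name" = sudo ] && return 1   # simplified: don't signal the wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap; works because spdk_tgt is our child
    }

The `wait` at the end is why the log shows the target's own DEBUG shutdown lines interleaved after the "killing process" message.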
00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=76991 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 76991 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 76991 ']' 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.872 18:23:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:37.872 [2024-11-26 18:23:12.034327] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
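fio.sh's setup above is a plain associative-array dispatch: the positional arguments carry the base device, the cache device, and a suite name, and the suite name selects a space-separated workload list. Reconstructed from the trace (argument validation in the real script is more involved):

    declare -A suite
    suite[basic]='randw-verify randw-verify-j2 randw-verify-depth128'
    suite[extended]='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
    suite[nightly]='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'

    device=$1 cache_device=$2   # 0000:00:11.0 and 0000:00:10.0 in this run
    tests=${suite[$3]}          # $3 = basic -> the three randw-verify jobs
    [ -z "$tests" ] && { echo "usage: fio.sh <dev> <cache> <suite>" >&2; exit 1; }

With $3 = basic, `tests` resolves to exactly the three-job list echoed in the trace, and the `'[' -z ... ']'` check is the usage guard.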
00:21:37.872 [2024-11-26 18:23:12.034708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76991 ] 00:21:37.872 [2024-11-26 18:23:12.211513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:38.131 [2024-11-26 18:23:12.336631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.131 [2024-11-26 18:23:12.336714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.131 [2024-11-26 18:23:12.336741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.067 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:39.067 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:21:39.067 18:23:13 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:39.067 18:23:13 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:21:39.067 18:23:13 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:39.067 18:23:13 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:21:39.067 18:23:13 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:21:39.067 18:23:13 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:39.326 18:23:13 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:39.326 18:23:13 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:21:39.326 18:23:13 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:39.326 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:39.326 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:39.326 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:39.326 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:39.326 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:39.585 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:39.585 { 00:21:39.585 "name": "nvme0n1", 00:21:39.585 "aliases": [ 00:21:39.585 "ed7b7e5f-e142-456d-af13-17fdf16271b8" 00:21:39.585 ], 00:21:39.585 "product_name": "NVMe disk", 00:21:39.585 "block_size": 4096, 00:21:39.585 "num_blocks": 1310720, 00:21:39.585 "uuid": "ed7b7e5f-e142-456d-af13-17fdf16271b8", 00:21:39.585 "numa_id": -1, 00:21:39.585 "assigned_rate_limits": { 00:21:39.585 "rw_ios_per_sec": 0, 00:21:39.585 "rw_mbytes_per_sec": 0, 00:21:39.585 "r_mbytes_per_sec": 0, 00:21:39.585 "w_mbytes_per_sec": 0 00:21:39.585 }, 00:21:39.585 "claimed": false, 00:21:39.585 "zoned": false, 00:21:39.585 "supported_io_types": { 00:21:39.585 "read": true, 00:21:39.585 "write": true, 00:21:39.585 "unmap": true, 00:21:39.585 "flush": true, 00:21:39.585 "reset": true, 00:21:39.585 "nvme_admin": true, 00:21:39.585 "nvme_io": true, 00:21:39.585 "nvme_io_md": false, 00:21:39.585 "write_zeroes": true, 00:21:39.585 "zcopy": false, 00:21:39.585 "get_zone_info": false, 00:21:39.585 "zone_management": false, 00:21:39.585 "zone_append": false, 00:21:39.585 "compare": true, 00:21:39.585 "compare_and_write": false, 00:21:39.585 "abort": true, 00:21:39.585 
"seek_hole": false, 00:21:39.585 "seek_data": false, 00:21:39.585 "copy": true, 00:21:39.585 "nvme_iov_md": false 00:21:39.585 }, 00:21:39.585 "driver_specific": { 00:21:39.585 "nvme": [ 00:21:39.585 { 00:21:39.585 "pci_address": "0000:00:11.0", 00:21:39.585 "trid": { 00:21:39.585 "trtype": "PCIe", 00:21:39.585 "traddr": "0000:00:11.0" 00:21:39.585 }, 00:21:39.585 "ctrlr_data": { 00:21:39.585 "cntlid": 0, 00:21:39.585 "vendor_id": "0x1b36", 00:21:39.585 "model_number": "QEMU NVMe Ctrl", 00:21:39.585 "serial_number": "12341", 00:21:39.585 "firmware_revision": "8.0.0", 00:21:39.585 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:39.585 "oacs": { 00:21:39.585 "security": 0, 00:21:39.585 "format": 1, 00:21:39.585 "firmware": 0, 00:21:39.585 "ns_manage": 1 00:21:39.585 }, 00:21:39.585 "multi_ctrlr": false, 00:21:39.585 "ana_reporting": false 00:21:39.585 }, 00:21:39.585 "vs": { 00:21:39.585 "nvme_version": "1.4" 00:21:39.585 }, 00:21:39.585 "ns_data": { 00:21:39.585 "id": 1, 00:21:39.585 "can_share": false 00:21:39.585 } 00:21:39.585 } 00:21:39.585 ], 00:21:39.585 "mp_policy": "active_passive" 00:21:39.585 } 00:21:39.585 } 00:21:39.585 ]' 00:21:39.585 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:39.585 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:39.585 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:39.585 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:39.585 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:39.585 18:23:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:21:39.585 18:23:13 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:21:39.585 18:23:13 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:39.585 18:23:13 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:21:39.585 18:23:13 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:39.585 18:23:13 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:39.844 18:23:14 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:21:39.844 18:23:14 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:40.102 18:23:14 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=4bd0aef1-9c84-463c-9003-6421619df3f4 00:21:40.102 18:23:14 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4bd0aef1-9c84-463c-9003-6421619df3f4 00:21:40.668 18:23:14 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=a787f3b6-517d-4d18-8f36-b3c5d9e56015 00:21:40.668 18:23:14 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a787f3b6-517d-4d18-8f36-b3c5d9e56015 00:21:40.668 18:23:14 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:21:40.668 18:23:14 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:40.668 18:23:14 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=a787f3b6-517d-4d18-8f36-b3c5d9e56015 00:21:40.668 18:23:14 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:21:40.668 18:23:14 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size a787f3b6-517d-4d18-8f36-b3c5d9e56015 00:21:40.668 18:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=a787f3b6-517d-4d18-8f36-b3c5d9e56015 
00:21:40.668 18:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:40.668 18:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:40.668 18:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:40.668 18:23:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a787f3b6-517d-4d18-8f36-b3c5d9e56015 00:21:40.668 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:40.668 { 00:21:40.668 "name": "a787f3b6-517d-4d18-8f36-b3c5d9e56015", 00:21:40.668 "aliases": [ 00:21:40.668 "lvs/nvme0n1p0" 00:21:40.668 ], 00:21:40.668 "product_name": "Logical Volume", 00:21:40.668 "block_size": 4096, 00:21:40.668 "num_blocks": 26476544, 00:21:40.668 "uuid": "a787f3b6-517d-4d18-8f36-b3c5d9e56015", 00:21:40.668 "assigned_rate_limits": { 00:21:40.668 "rw_ios_per_sec": 0, 00:21:40.668 "rw_mbytes_per_sec": 0, 00:21:40.668 "r_mbytes_per_sec": 0, 00:21:40.668 "w_mbytes_per_sec": 0 00:21:40.668 }, 00:21:40.668 "claimed": false, 00:21:40.668 "zoned": false, 00:21:40.668 "supported_io_types": { 00:21:40.668 "read": true, 00:21:40.668 "write": true, 00:21:40.668 "unmap": true, 00:21:40.668 "flush": false, 00:21:40.668 "reset": true, 00:21:40.668 "nvme_admin": false, 00:21:40.668 "nvme_io": false, 00:21:40.668 "nvme_io_md": false, 00:21:40.668 "write_zeroes": true, 00:21:40.668 "zcopy": false, 00:21:40.668 "get_zone_info": false, 00:21:40.668 "zone_management": false, 00:21:40.668 "zone_append": false, 00:21:40.668 "compare": false, 00:21:40.668 "compare_and_write": false, 00:21:40.668 "abort": false, 00:21:40.668 "seek_hole": true, 00:21:40.668 "seek_data": true, 00:21:40.668 "copy": false, 00:21:40.668 "nvme_iov_md": false 00:21:40.668 }, 00:21:40.668 "driver_specific": { 00:21:40.668 "lvol": { 00:21:40.668 "lvol_store_uuid": "4bd0aef1-9c84-463c-9003-6421619df3f4", 00:21:40.668 "base_bdev": "nvme0n1", 00:21:40.668 "thin_provision": true, 00:21:40.668 "num_allocated_clusters": 0, 00:21:40.668 "snapshot": false, 00:21:40.668 "clone": false, 00:21:40.668 "esnap_clone": false 00:21:40.668 } 00:21:40.668 } 00:21:40.668 } 00:21:40.668 ]' 00:21:40.668 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:40.936 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:40.936 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:40.936 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:40.936 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:40.936 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:21:40.936 18:23:15 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:21:40.936 18:23:15 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:21:40.936 18:23:15 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:41.211 18:23:15 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:41.211 18:23:15 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:41.211 18:23:15 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size a787f3b6-517d-4d18-8f36-b3c5d9e56015 00:21:41.211 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=a787f3b6-517d-4d18-8f36-b3c5d9e56015 00:21:41.211 18:23:15 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:41.211 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:41.211 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:41.211 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a787f3b6-517d-4d18-8f36-b3c5d9e56015 00:21:41.470 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:41.470 { 00:21:41.470 "name": "a787f3b6-517d-4d18-8f36-b3c5d9e56015", 00:21:41.470 "aliases": [ 00:21:41.470 "lvs/nvme0n1p0" 00:21:41.470 ], 00:21:41.470 "product_name": "Logical Volume", 00:21:41.470 "block_size": 4096, 00:21:41.470 "num_blocks": 26476544, 00:21:41.470 "uuid": "a787f3b6-517d-4d18-8f36-b3c5d9e56015", 00:21:41.470 "assigned_rate_limits": { 00:21:41.470 "rw_ios_per_sec": 0, 00:21:41.470 "rw_mbytes_per_sec": 0, 00:21:41.470 "r_mbytes_per_sec": 0, 00:21:41.470 "w_mbytes_per_sec": 0 00:21:41.470 }, 00:21:41.470 "claimed": false, 00:21:41.470 "zoned": false, 00:21:41.470 "supported_io_types": { 00:21:41.470 "read": true, 00:21:41.470 "write": true, 00:21:41.470 "unmap": true, 00:21:41.470 "flush": false, 00:21:41.470 "reset": true, 00:21:41.470 "nvme_admin": false, 00:21:41.470 "nvme_io": false, 00:21:41.470 "nvme_io_md": false, 00:21:41.470 "write_zeroes": true, 00:21:41.470 "zcopy": false, 00:21:41.470 "get_zone_info": false, 00:21:41.470 "zone_management": false, 00:21:41.470 "zone_append": false, 00:21:41.470 "compare": false, 00:21:41.470 "compare_and_write": false, 00:21:41.470 "abort": false, 00:21:41.470 "seek_hole": true, 00:21:41.470 "seek_data": true, 00:21:41.470 "copy": false, 00:21:41.470 "nvme_iov_md": false 00:21:41.470 }, 00:21:41.470 "driver_specific": { 00:21:41.470 "lvol": { 00:21:41.470 "lvol_store_uuid": "4bd0aef1-9c84-463c-9003-6421619df3f4", 00:21:41.470 "base_bdev": "nvme0n1", 00:21:41.470 "thin_provision": true, 00:21:41.470 "num_allocated_clusters": 0, 00:21:41.470 "snapshot": false, 00:21:41.470 "clone": false, 00:21:41.470 "esnap_clone": false 00:21:41.470 } 00:21:41.470 } 00:21:41.470 } 00:21:41.470 ]' 00:21:41.470 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:41.470 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:41.470 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:41.729 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:41.729 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:41.729 18:23:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:21:41.729 18:23:15 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:21:41.729 18:23:15 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:41.988 18:23:16 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:21:41.988 18:23:16 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:21:41.988 18:23:16 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:21:41.989 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:21:41.989 18:23:16 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size a787f3b6-517d-4d18-8f36-b3c5d9e56015 00:21:41.989 18:23:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=a787f3b6-517d-4d18-8f36-b3c5d9e56015 00:21:41.989 18:23:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:41.989 18:23:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:41.989 18:23:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:41.989 18:23:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a787f3b6-517d-4d18-8f36-b3c5d9e56015 00:21:42.248 18:23:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:42.248 { 00:21:42.248 "name": "a787f3b6-517d-4d18-8f36-b3c5d9e56015", 00:21:42.248 "aliases": [ 00:21:42.248 "lvs/nvme0n1p0" 00:21:42.248 ], 00:21:42.248 "product_name": "Logical Volume", 00:21:42.248 "block_size": 4096, 00:21:42.248 "num_blocks": 26476544, 00:21:42.248 "uuid": "a787f3b6-517d-4d18-8f36-b3c5d9e56015", 00:21:42.248 "assigned_rate_limits": { 00:21:42.248 "rw_ios_per_sec": 0, 00:21:42.248 "rw_mbytes_per_sec": 0, 00:21:42.248 "r_mbytes_per_sec": 0, 00:21:42.248 "w_mbytes_per_sec": 0 00:21:42.248 }, 00:21:42.248 "claimed": false, 00:21:42.248 "zoned": false, 00:21:42.248 "supported_io_types": { 00:21:42.248 "read": true, 00:21:42.248 "write": true, 00:21:42.248 "unmap": true, 00:21:42.248 "flush": false, 00:21:42.248 "reset": true, 00:21:42.248 "nvme_admin": false, 00:21:42.248 "nvme_io": false, 00:21:42.248 "nvme_io_md": false, 00:21:42.248 "write_zeroes": true, 00:21:42.248 "zcopy": false, 00:21:42.248 "get_zone_info": false, 00:21:42.248 "zone_management": false, 00:21:42.248 "zone_append": false, 00:21:42.248 "compare": false, 00:21:42.248 "compare_and_write": false, 00:21:42.248 "abort": false, 00:21:42.248 "seek_hole": true, 00:21:42.248 "seek_data": true, 00:21:42.248 "copy": false, 00:21:42.248 "nvme_iov_md": false 00:21:42.248 }, 00:21:42.248 "driver_specific": { 00:21:42.248 "lvol": { 00:21:42.248 "lvol_store_uuid": "4bd0aef1-9c84-463c-9003-6421619df3f4", 00:21:42.248 "base_bdev": "nvme0n1", 00:21:42.248 "thin_provision": true, 00:21:42.248 "num_allocated_clusters": 0, 00:21:42.248 "snapshot": false, 00:21:42.248 "clone": false, 00:21:42.248 "esnap_clone": false 00:21:42.248 } 00:21:42.248 } 00:21:42.248 } 00:21:42.248 ]' 00:21:42.248 18:23:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:42.248 18:23:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:42.248 18:23:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:42.248 18:23:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:42.248 18:23:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:42.248 18:23:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:21:42.248 18:23:16 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:21:42.248 18:23:16 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:21:42.248 18:23:16 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a787f3b6-517d-4d18-8f36-b3c5d9e56015 -c nvc0n1p0 --l2p_dram_limit 60 00:21:42.507 [2024-11-26 18:23:16.826025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.507 [2024-11-26 18:23:16.826093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:42.507 [2024-11-26 18:23:16.826121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:42.507 
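One real defect surfaces in the trace above: fio.sh line 52 evaluates `'[' -eq 1 ']'`, meaning the left-hand operand expanded to nothing, and bash's `[` reports "unary operator expected". The test just returns status 2, the script treats that as false, and execution continues at line 56. A minimal reproduction and the usual hardening (the variable name here is hypothetical; the trace does not show which one was empty):

    flag=                           # unset/empty, as at fio.sh:52
    [ $flag -eq 1 ] && echo taken   # expands to '[ -eq 1 ]' -> unary operator expected
    echo "status: $?"               # 2; an if/&& treats this as false and moves on

    [ "${flag:-0}" -eq 1 ] || echo 'quoted + defaulted: clean false, no error'
    (( flag == 1 )) || echo 'arithmetic test: empty evaluates as 0'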
[2024-11-26 18:23:16.826145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.507 [2024-11-26 18:23:16.826290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.507 [2024-11-26 18:23:16.826313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:42.507 [2024-11-26 18:23:16.826333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:21:42.507 [2024-11-26 18:23:16.826346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.507 [2024-11-26 18:23:16.826389] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:42.507 [2024-11-26 18:23:16.827442] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:42.507 [2024-11-26 18:23:16.827492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.507 [2024-11-26 18:23:16.827508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:42.507 [2024-11-26 18:23:16.827525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.109 ms 00:21:42.507 [2024-11-26 18:23:16.827537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.507 [2024-11-26 18:23:16.827727] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 20ce1791-87ed-4a7d-a7d5-548192b4a30c 00:21:42.507 [2024-11-26 18:23:16.829940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.507 [2024-11-26 18:23:16.830120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:42.507 [2024-11-26 18:23:16.830270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:21:42.507 [2024-11-26 18:23:16.830428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.507 [2024-11-26 18:23:16.840511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.507 [2024-11-26 18:23:16.840793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:42.507 [2024-11-26 18:23:16.840951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.847 ms 00:21:42.507 [2024-11-26 18:23:16.841119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.507 [2024-11-26 18:23:16.841437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.507 [2024-11-26 18:23:16.841616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:42.507 [2024-11-26 18:23:16.841789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:21:42.507 [2024-11-26 18:23:16.841827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.507 [2024-11-26 18:23:16.841943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.507 [2024-11-26 18:23:16.841972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:42.507 [2024-11-26 18:23:16.841989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:42.507 [2024-11-26 18:23:16.842003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.507 [2024-11-26 18:23:16.842056] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:42.507 [2024-11-26 18:23:16.847432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.507 [2024-11-26 
18:23:16.847494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:42.507 [2024-11-26 18:23:16.847524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.389 ms 00:21:42.507 [2024-11-26 18:23:16.847536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.507 [2024-11-26 18:23:16.847612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.507 [2024-11-26 18:23:16.847631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:42.507 [2024-11-26 18:23:16.847666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:42.507 [2024-11-26 18:23:16.847678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.507 [2024-11-26 18:23:16.847740] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:42.507 [2024-11-26 18:23:16.847937] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:42.507 [2024-11-26 18:23:16.847979] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:42.507 [2024-11-26 18:23:16.847999] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:42.507 [2024-11-26 18:23:16.848018] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:42.507 [2024-11-26 18:23:16.848033] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:42.507 [2024-11-26 18:23:16.848048] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:42.507 [2024-11-26 18:23:16.848061] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:42.507 [2024-11-26 18:23:16.848075] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:42.507 [2024-11-26 18:23:16.848087] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:42.507 [2024-11-26 18:23:16.848110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.507 [2024-11-26 18:23:16.848123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:42.507 [2024-11-26 18:23:16.848138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:21:42.507 [2024-11-26 18:23:16.848151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.507 [2024-11-26 18:23:16.848271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.507 [2024-11-26 18:23:16.848288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:42.507 [2024-11-26 18:23:16.848304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:21:42.507 [2024-11-26 18:23:16.848316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.508 [2024-11-26 18:23:16.848449] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:42.508 [2024-11-26 18:23:16.848469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:42.508 [2024-11-26 18:23:16.848485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:42.508 [2024-11-26 18:23:16.848497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:42.508 [2024-11-26 18:23:16.848512] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:21:42.508 [2024-11-26 18:23:16.848523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:42.508 [2024-11-26 18:23:16.848537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:42.508 [2024-11-26 18:23:16.848548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:42.508 [2024-11-26 18:23:16.848807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:42.508 [2024-11-26 18:23:16.848857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:42.508 [2024-11-26 18:23:16.849075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:42.508 [2024-11-26 18:23:16.849140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:42.508 [2024-11-26 18:23:16.849327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:42.508 [2024-11-26 18:23:16.849389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:42.508 [2024-11-26 18:23:16.849533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:42.508 [2024-11-26 18:23:16.849614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:42.508 [2024-11-26 18:23:16.849722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:42.508 [2024-11-26 18:23:16.849855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:42.508 [2024-11-26 18:23:16.849997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:42.508 [2024-11-26 18:23:16.850157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:42.508 [2024-11-26 18:23:16.850226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:42.508 [2024-11-26 18:23:16.850386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:42.508 [2024-11-26 18:23:16.850417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:42.508 [2024-11-26 18:23:16.850430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:42.508 [2024-11-26 18:23:16.850456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:42.508 [2024-11-26 18:23:16.850470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:42.508 [2024-11-26 18:23:16.850485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:42.508 [2024-11-26 18:23:16.850505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:42.508 [2024-11-26 18:23:16.850518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:42.508 [2024-11-26 18:23:16.850547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:42.508 [2024-11-26 18:23:16.850578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:42.508 [2024-11-26 18:23:16.850590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:42.508 [2024-11-26 18:23:16.850607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:42.508 [2024-11-26 18:23:16.850637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:42.508 [2024-11-26 18:23:16.850652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:42.508 [2024-11-26 18:23:16.850664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:42.508 [2024-11-26 18:23:16.850677] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:42.508 [2024-11-26 18:23:16.850689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:42.508 [2024-11-26 18:23:16.850703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:42.508 [2024-11-26 18:23:16.850714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:42.508 [2024-11-26 18:23:16.850729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:42.508 [2024-11-26 18:23:16.850741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:42.508 [2024-11-26 18:23:16.850754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:42.508 [2024-11-26 18:23:16.850765] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:42.508 [2024-11-26 18:23:16.850779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:42.508 [2024-11-26 18:23:16.850791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:42.508 [2024-11-26 18:23:16.850805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:42.508 [2024-11-26 18:23:16.850817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:42.508 [2024-11-26 18:23:16.850840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:42.508 [2024-11-26 18:23:16.850860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:42.508 [2024-11-26 18:23:16.850886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:42.508 [2024-11-26 18:23:16.850906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:42.508 [2024-11-26 18:23:16.850921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:42.508 [2024-11-26 18:23:16.850956] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:42.508 [2024-11-26 18:23:16.850990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:42.508 [2024-11-26 18:23:16.851005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:42.508 [2024-11-26 18:23:16.851029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:42.508 [2024-11-26 18:23:16.851041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:42.508 [2024-11-26 18:23:16.851056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:42.508 [2024-11-26 18:23:16.851074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:42.508 [2024-11-26 18:23:16.851100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:42.508 [2024-11-26 18:23:16.851122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:42.508 [2024-11-26 18:23:16.851139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:21:42.508 [2024-11-26 18:23:16.851152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:42.508 [2024-11-26 18:23:16.851171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:42.508 [2024-11-26 18:23:16.851184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:42.508 [2024-11-26 18:23:16.851198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:42.508 [2024-11-26 18:23:16.851209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:42.508 [2024-11-26 18:23:16.851224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:42.508 [2024-11-26 18:23:16.851235] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:42.508 [2024-11-26 18:23:16.851261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:42.508 [2024-11-26 18:23:16.851275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:42.508 [2024-11-26 18:23:16.851290] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:42.508 [2024-11-26 18:23:16.851302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:42.508 [2024-11-26 18:23:16.851316] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:42.508 [2024-11-26 18:23:16.851330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.508 [2024-11-26 18:23:16.851353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:42.508 [2024-11-26 18:23:16.851367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.955 ms 00:21:42.508 [2024-11-26 18:23:16.851381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.508 [2024-11-26 18:23:16.851497] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
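Everything in the superblock dump above is expressed in 4096-byte FTL blocks, so each `blk_offs`/`blk_sz` pair converts directly to the MiB figures in the region dump before it. Checking the pair whose numbers match the "Region l2p" lines (type 0x2, blk_offs 0x20, blk_sz 0x5000):

    echo $(( 0x20 * 4096 ))              # 131072 bytes = 0.125 MiB -> printed as "offset: 0.12 MiB"
    echo $(( 0x5000 * 4096 / 1048576 ))  # 80 -> matches "Region l2p ... blocks: 80.00 MiB"

That 80 MiB also agrees with the layout setup earlier: 20971520 L2P entries x 4-byte address size = 83886080 bytes = 80 MiB.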
00:21:42.508 [2024-11-26 18:23:16.851524] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:46.692 [2024-11-26 18:23:20.502607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.692 [2024-11-26 18:23:20.503596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:46.692 [2024-11-26 18:23:20.503736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3651.136 ms 00:21:46.692 [2024-11-26 18:23:20.503826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.692 [2024-11-26 18:23:20.546256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.692 [2024-11-26 18:23:20.546842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:46.692 [2024-11-26 18:23:20.546989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.039 ms 00:21:46.692 [2024-11-26 18:23:20.547095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.692 [2024-11-26 18:23:20.547411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.692 [2024-11-26 18:23:20.547675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:46.692 [2024-11-26 18:23:20.547836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:21:46.692 [2024-11-26 18:23:20.548011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.692 [2024-11-26 18:23:20.605729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.692 [2024-11-26 18:23:20.606233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:46.692 [2024-11-26 18:23:20.606373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.550 ms 00:21:46.692 [2024-11-26 18:23:20.606403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.692 [2024-11-26 18:23:20.606504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.692 [2024-11-26 18:23:20.606528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:46.692 [2024-11-26 18:23:20.606543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:46.692 [2024-11-26 18:23:20.606584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.692 [2024-11-26 18:23:20.607348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.692 [2024-11-26 18:23:20.607373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:46.692 [2024-11-26 18:23:20.607391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:21:46.692 [2024-11-26 18:23:20.607406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.692 [2024-11-26 18:23:20.607605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.692 [2024-11-26 18:23:20.607630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:46.692 [2024-11-26 18:23:20.607644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:21:46.692 [2024-11-26 18:23:20.607662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.692 [2024-11-26 18:23:20.631480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.692 [2024-11-26 18:23:20.631881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:46.692 [2024-11-26 
18:23:20.632014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.762 ms 00:21:46.692 [2024-11-26 18:23:20.632101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.692 [2024-11-26 18:23:20.647140] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:46.692 [2024-11-26 18:23:20.670786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.692 [2024-11-26 18:23:20.671083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:46.692 [2024-11-26 18:23:20.671191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.449 ms 00:21:46.692 [2024-11-26 18:23:20.671274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.692 [2024-11-26 18:23:20.750045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.692 [2024-11-26 18:23:20.750287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:46.692 [2024-11-26 18:23:20.750435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.407 ms 00:21:46.692 [2024-11-26 18:23:20.750579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.692 [2024-11-26 18:23:20.750990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.692 [2024-11-26 18:23:20.751094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:46.693 [2024-11-26 18:23:20.751185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:21:46.693 [2024-11-26 18:23:20.751267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.693 [2024-11-26 18:23:20.783650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.693 [2024-11-26 18:23:20.784110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:46.693 [2024-11-26 18:23:20.784252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.234 ms 00:21:46.693 [2024-11-26 18:23:20.784349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.693 [2024-11-26 18:23:20.815186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.693 [2024-11-26 18:23:20.815513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:46.693 [2024-11-26 18:23:20.815687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.701 ms 00:21:46.693 [2024-11-26 18:23:20.815784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.693 [2024-11-26 18:23:20.816747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.693 [2024-11-26 18:23:20.816978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:46.693 [2024-11-26 18:23:20.817141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.833 ms 00:21:46.693 [2024-11-26 18:23:20.817376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.693 [2024-11-26 18:23:20.918211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.693 [2024-11-26 18:23:20.918830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:46.693 [2024-11-26 18:23:20.919108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.468 ms 00:21:46.693 [2024-11-26 18:23:20.919343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.693 [2024-11-26 
18:23:20.953201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.693 [2024-11-26 18:23:20.953684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:46.693 [2024-11-26 18:23:20.953844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.426 ms 00:21:46.693 [2024-11-26 18:23:20.953948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.693 [2024-11-26 18:23:20.985845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.693 [2024-11-26 18:23:20.986198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:46.693 [2024-11-26 18:23:20.986337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.738 ms 00:21:46.693 [2024-11-26 18:23:20.986426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.693 [2024-11-26 18:23:21.017347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.693 [2024-11-26 18:23:21.017702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:46.693 [2024-11-26 18:23:21.017972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.728 ms 00:21:46.693 [2024-11-26 18:23:21.018238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.693 [2024-11-26 18:23:21.018537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.693 [2024-11-26 18:23:21.018789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:46.693 [2024-11-26 18:23:21.019049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:46.693 [2024-11-26 18:23:21.019262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.693 [2024-11-26 18:23:21.019682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.693 [2024-11-26 18:23:21.019908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:46.693 [2024-11-26 18:23:21.020004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:46.693 [2024-11-26 18:23:21.020204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.693 [2024-11-26 18:23:21.022044] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4195.270 ms, result 0 00:21:46.693 { 00:21:46.693 "name": "ftl0", 00:21:46.693 "uuid": "20ce1791-87ed-4a7d-a7d5-548192b4a30c" 00:21:46.693 } 00:21:46.693 18:23:21 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:21:46.693 18:23:21 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:21:46.693 18:23:21 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:46.693 18:23:21 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:21:46.693 18:23:21 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:46.693 18:23:21 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:46.693 18:23:21 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:46.951 18:23:21 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:47.209 [ 00:21:47.209 { 00:21:47.209 "name": "ftl0", 00:21:47.209 "aliases": [ 00:21:47.209 "20ce1791-87ed-4a7d-a7d5-548192b4a30c" 00:21:47.209 ], 00:21:47.209 "product_name": "FTL 
disk", 00:21:47.209 "block_size": 4096, 00:21:47.209 "num_blocks": 20971520, 00:21:47.209 "uuid": "20ce1791-87ed-4a7d-a7d5-548192b4a30c", 00:21:47.209 "assigned_rate_limits": { 00:21:47.209 "rw_ios_per_sec": 0, 00:21:47.209 "rw_mbytes_per_sec": 0, 00:21:47.209 "r_mbytes_per_sec": 0, 00:21:47.209 "w_mbytes_per_sec": 0 00:21:47.209 }, 00:21:47.209 "claimed": false, 00:21:47.209 "zoned": false, 00:21:47.209 "supported_io_types": { 00:21:47.209 "read": true, 00:21:47.209 "write": true, 00:21:47.209 "unmap": true, 00:21:47.209 "flush": true, 00:21:47.209 "reset": false, 00:21:47.209 "nvme_admin": false, 00:21:47.209 "nvme_io": false, 00:21:47.209 "nvme_io_md": false, 00:21:47.209 "write_zeroes": true, 00:21:47.209 "zcopy": false, 00:21:47.209 "get_zone_info": false, 00:21:47.209 "zone_management": false, 00:21:47.209 "zone_append": false, 00:21:47.209 "compare": false, 00:21:47.209 "compare_and_write": false, 00:21:47.209 "abort": false, 00:21:47.209 "seek_hole": false, 00:21:47.209 "seek_data": false, 00:21:47.209 "copy": false, 00:21:47.209 "nvme_iov_md": false 00:21:47.209 }, 00:21:47.209 "driver_specific": { 00:21:47.209 "ftl": { 00:21:47.209 "base_bdev": "a787f3b6-517d-4d18-8f36-b3c5d9e56015", 00:21:47.209 "cache": "nvc0n1p0" 00:21:47.209 } 00:21:47.209 } 00:21:47.209 } 00:21:47.209 ] 00:21:47.209 18:23:21 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:21:47.209 18:23:21 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:21:47.209 18:23:21 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:47.468 18:23:21 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:21:47.468 18:23:21 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:47.732 [2024-11-26 18:23:22.157422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.732 [2024-11-26 18:23:22.158071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:47.732 [2024-11-26 18:23:22.158114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:47.732 [2024-11-26 18:23:22.158139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.732 [2024-11-26 18:23:22.158201] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:47.732 [2024-11-26 18:23:22.161944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.732 [2024-11-26 18:23:22.161985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:47.732 [2024-11-26 18:23:22.162007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.710 ms 00:21:47.732 [2024-11-26 18:23:22.162019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.732 [2024-11-26 18:23:22.162588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.732 [2024-11-26 18:23:22.162616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:47.732 [2024-11-26 18:23:22.162634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:21:47.732 [2024-11-26 18:23:22.162647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.732 [2024-11-26 18:23:22.165913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.732 [2024-11-26 18:23:22.165953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:47.732 
[2024-11-26 18:23:22.165974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.226 ms 00:21:47.732 [2024-11-26 18:23:22.165987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.732 [2024-11-26 18:23:22.172858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.732 [2024-11-26 18:23:22.172894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:47.732 [2024-11-26 18:23:22.172929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.831 ms 00:21:47.732 [2024-11-26 18:23:22.172941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.999 [2024-11-26 18:23:22.205183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.999 [2024-11-26 18:23:22.205243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:47.999 [2024-11-26 18:23:22.205289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.115 ms 00:21:47.999 [2024-11-26 18:23:22.205303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.999 [2024-11-26 18:23:22.225062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.999 [2024-11-26 18:23:22.225288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:47.999 [2024-11-26 18:23:22.225334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.694 ms 00:21:48.000 [2024-11-26 18:23:22.225348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.000 [2024-11-26 18:23:22.225686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.000 [2024-11-26 18:23:22.225713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:48.000 [2024-11-26 18:23:22.225732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:21:48.000 [2024-11-26 18:23:22.225745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.000 [2024-11-26 18:23:22.257008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.000 [2024-11-26 18:23:22.257085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:48.000 [2024-11-26 18:23:22.257127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.203 ms 00:21:48.000 [2024-11-26 18:23:22.257140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.000 [2024-11-26 18:23:22.287755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.000 [2024-11-26 18:23:22.287808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:48.000 [2024-11-26 18:23:22.287832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.550 ms 00:21:48.000 [2024-11-26 18:23:22.287844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.000 [2024-11-26 18:23:22.319067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.000 [2024-11-26 18:23:22.319457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:48.000 [2024-11-26 18:23:22.319501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.145 ms 00:21:48.000 [2024-11-26 18:23:22.319516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.000 [2024-11-26 18:23:22.350089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.000 [2024-11-26 18:23:22.350386] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:48.000 [2024-11-26 18:23:22.350429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.338 ms 00:21:48.000 [2024-11-26 18:23:22.350443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.000 [2024-11-26 18:23:22.350529] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:48.000 [2024-11-26 18:23:22.350586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 
[2024-11-26 18:23:22.350914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.350989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:21:48.000 [2024-11-26 18:23:22.351284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:48.000 [2024-11-26 18:23:22.351622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.351635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.351653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.351667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.351683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.351696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.351711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.351724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.351739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.351751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.351766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.351778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.351820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.351833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.351848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.351860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.351876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.351889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.351907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.351919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.351934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.351947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.351963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.351975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.351990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.352008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.352024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.352037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.352051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.352064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.352082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:48.001 [2024-11-26 18:23:22.352103] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:48.001 [2024-11-26 18:23:22.352119] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 20ce1791-87ed-4a7d-a7d5-548192b4a30c 00:21:48.001 [2024-11-26 18:23:22.352132] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:48.001 [2024-11-26 18:23:22.352168] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:48.001 [2024-11-26 18:23:22.352185] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:48.001 [2024-11-26 18:23:22.352201] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:48.001 [2024-11-26 18:23:22.352212] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:48.001 [2024-11-26 18:23:22.352227] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:48.001 [2024-11-26 18:23:22.352239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:48.001 [2024-11-26 18:23:22.352252] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:48.001 [2024-11-26 18:23:22.352262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:48.001 [2024-11-26 18:23:22.352277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.001 [2024-11-26 18:23:22.352290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:48.001 [2024-11-26 18:23:22.352306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.753 ms 00:21:48.001 [2024-11-26 18:23:22.352318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.001 [2024-11-26 18:23:22.369544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.001 [2024-11-26 18:23:22.369622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:48.001 [2024-11-26 18:23:22.369646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.135 ms 00:21:48.001 [2024-11-26 18:23:22.369658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.001 [2024-11-26 18:23:22.370173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.001 [2024-11-26 18:23:22.370210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:48.001 [2024-11-26 18:23:22.370230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.467 ms 00:21:48.001 [2024-11-26 18:23:22.370247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.001 [2024-11-26 18:23:22.430542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.001 [2024-11-26 18:23:22.430642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:48.001 [2024-11-26 18:23:22.430670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.001 [2024-11-26 18:23:22.430683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
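Every management step in this run, startup and shutdown alike, is traced by mngt/ftl_mngt.c in the same fixed shape: an Action (or, as above, Rollback) record, then name, duration, and status. That regularity makes slow phases easy to pull out of a saved console log; a rough sketch, assuming the output was captured to build.log (the filename is an assumption, not something the harness writes):

  # Pair each traced step name with its duration and sort slowest-first.
  # Relies on the name:/duration: records staying interleaved as they do here.
  grep -oE 'name: [A-Za-z0-9 ]+|duration: [0-9.]+ ms' build.log \
    | paste -d' ' - - \
    | sort -t: -k3 -rn | head

In this run that puts "Scrub NV cache" (3651.136 ms) and "Wipe P2L region" (100.468 ms) at the top of the startup phases, consistent with the records above.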
00:21:48.001 [2024-11-26 18:23:22.430783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.001 [2024-11-26 18:23:22.430800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:48.001 [2024-11-26 18:23:22.430816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.001 [2024-11-26 18:23:22.430828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.001 [2024-11-26 18:23:22.431014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.001 [2024-11-26 18:23:22.431045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:48.001 [2024-11-26 18:23:22.431076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.001 [2024-11-26 18:23:22.431095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.001 [2024-11-26 18:23:22.431140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.001 [2024-11-26 18:23:22.431162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:48.001 [2024-11-26 18:23:22.431178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.001 [2024-11-26 18:23:22.431190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.259 [2024-11-26 18:23:22.547233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.259 [2024-11-26 18:23:22.547608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:48.259 [2024-11-26 18:23:22.547656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.259 [2024-11-26 18:23:22.547672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.259 [2024-11-26 18:23:22.634400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.259 [2024-11-26 18:23:22.634838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:48.259 [2024-11-26 18:23:22.634899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.259 [2024-11-26 18:23:22.634916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.259 [2024-11-26 18:23:22.635087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.259 [2024-11-26 18:23:22.635108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:48.259 [2024-11-26 18:23:22.635131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.259 [2024-11-26 18:23:22.635144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.259 [2024-11-26 18:23:22.635244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.259 [2024-11-26 18:23:22.635262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:48.259 [2024-11-26 18:23:22.635278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.259 [2024-11-26 18:23:22.635291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.259 [2024-11-26 18:23:22.635452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.259 [2024-11-26 18:23:22.635473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:48.259 [2024-11-26 18:23:22.635501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.259 [2024-11-26 
18:23:22.635513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.260 [2024-11-26 18:23:22.635616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.260 [2024-11-26 18:23:22.635638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:48.260 [2024-11-26 18:23:22.635655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.260 [2024-11-26 18:23:22.635667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.260 [2024-11-26 18:23:22.635734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.260 [2024-11-26 18:23:22.635750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:48.260 [2024-11-26 18:23:22.635765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.260 [2024-11-26 18:23:22.635780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.260 [2024-11-26 18:23:22.635854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:48.260 [2024-11-26 18:23:22.635871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:48.260 [2024-11-26 18:23:22.635887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:48.260 [2024-11-26 18:23:22.635899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.260 [2024-11-26 18:23:22.636108] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 478.671 ms, result 0 00:21:48.260 true 00:21:48.260 18:23:22 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 76991 00:21:48.260 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 76991 ']' 00:21:48.260 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 76991 00:21:48.260 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:21:48.260 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:48.260 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76991 00:21:48.260 killing process with pid 76991 00:21:48.260 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:48.260 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:48.260 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76991' 00:21:48.260 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 76991 00:21:48.260 18:23:22 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 76991 00:21:53.549 18:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:21:53.549 18:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:21:53.549 18:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:21:53.549 18:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:53.549 18:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:53.549 18:23:27 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:21:53.549 18:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:21:53.549 18:23:27 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:53.549 18:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:53.549 18:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:53.549 18:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:53.549 18:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:21:53.549 18:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:53.549 18:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:53.549 18:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:53.549 18:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:21:53.549 18:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:53.549 18:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:53.549 18:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:53.549 18:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:21:53.549 18:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:53.549 18:23:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:21:53.549 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:21:53.549 fio-3.35 00:21:53.549 Starting 1 thread 00:22:00.187 00:22:00.187 test: (groupid=0, jobs=1): err= 0: pid=77213: Tue Nov 26 18:23:33 2024 00:22:00.187 read: IOPS=842, BW=56.0MiB/s (58.7MB/s)(255MiB/4547msec) 00:22:00.187 slat (nsec): min=5808, max=45427, avg=8064.87, stdev=3941.21 00:22:00.187 clat (usec): min=345, max=3663, avg=531.06, stdev=73.63 00:22:00.187 lat (usec): min=352, max=3670, avg=539.12, stdev=74.13 00:22:00.187 clat percentiles (usec): 00:22:00.187 | 1.00th=[ 416], 5.00th=[ 449], 10.00th=[ 465], 20.00th=[ 482], 00:22:00.187 | 30.00th=[ 502], 40.00th=[ 519], 50.00th=[ 529], 60.00th=[ 537], 00:22:00.187 | 70.00th=[ 553], 80.00th=[ 570], 90.00th=[ 603], 95.00th=[ 627], 00:22:00.187 | 99.00th=[ 668], 99.50th=[ 685], 99.90th=[ 725], 99.95th=[ 758], 00:22:00.187 | 99.99th=[ 3654] 00:22:00.187 write: IOPS=848, BW=56.4MiB/s (59.1MB/s)(256MiB/4542msec); 0 zone resets 00:22:00.187 slat (usec): min=18, max=108, avg=29.02, stdev= 8.38 00:22:00.187 clat (usec): min=405, max=1176, avg=596.16, stdev=69.24 00:22:00.187 lat (usec): min=433, max=1221, avg=625.18, stdev=69.91 00:22:00.187 clat percentiles (usec): 00:22:00.187 | 1.00th=[ 474], 5.00th=[ 510], 10.00th=[ 529], 20.00th=[ 545], 00:22:00.187 | 30.00th=[ 562], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 603], 00:22:00.187 | 70.00th=[ 619], 80.00th=[ 644], 90.00th=[ 668], 95.00th=[ 693], 00:22:00.187 | 99.00th=[ 922], 99.50th=[ 955], 99.90th=[ 1020], 99.95th=[ 1037], 00:22:00.187 | 99.99th=[ 1172] 00:22:00.187 bw ( KiB/s): min=54944, max=59160, per=100.00%, avg=57875.56, stdev=1341.36, samples=9 00:22:00.187 iops : min= 808, max= 870, avg=851.11, stdev=19.73, samples=9 00:22:00.187 lat (usec) : 500=16.00%, 750=82.95%, 1000=0.99% 00:22:00.187 lat 
(msec) : 2=0.05%, 4=0.01% 00:22:00.187 cpu : usr=99.10%, sys=0.13%, ctx=7, majf=0, minf=1169 00:22:00.187 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:00.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:00.187 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:00.187 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:00.187 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:00.187 00:22:00.187 Run status group 0 (all jobs): 00:22:00.187 READ: bw=56.0MiB/s (58.7MB/s), 56.0MiB/s-56.0MiB/s (58.7MB/s-58.7MB/s), io=255MiB (267MB), run=4547-4547msec 00:22:00.187 WRITE: bw=56.4MiB/s (59.1MB/s), 56.4MiB/s-56.4MiB/s (59.1MB/s-59.1MB/s), io=256MiB (269MB), run=4542-4542msec 00:22:00.754 ----------------------------------------------------- 00:22:00.754 Suppressions used: 00:22:00.754 count bytes template 00:22:00.754 1 5 /usr/src/fio/parse.c 00:22:00.754 1 8 libtcmalloc_minimal.so 00:22:00.754 1 904 libcrypto.so 00:22:00.754 ----------------------------------------------------- 00:22:00.754 00:22:00.754 18:23:35 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:22:00.754 18:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:00.754 18:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:01.013 18:23:35 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:01.013 18:23:35 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:22:01.013 18:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:01.013 18:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:01.013 18:23:35 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:01.013 18:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:01.013 18:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:01.013 18:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:01.013 18:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:01.013 18:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:01.013 18:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:22:01.013 18:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:01.013 18:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:01.013 18:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:01.013 18:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:22:01.013 18:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:01.013 18:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:01.013 18:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:01.013 18:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:22:01.013 18:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:01.013 18:23:35 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:01.272 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:01.272 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:01.272 fio-3.35 00:22:01.272 Starting 2 threads 00:22:39.998 00:22:39.998 first_half: (groupid=0, jobs=1): err= 0: pid=77323: Tue Nov 26 18:24:10 2024 00:22:39.998 read: IOPS=1933, BW=7734KiB/s (7919kB/s)(255MiB/33717msec) 00:22:39.998 slat (nsec): min=4724, max=73312, avg=8319.53, stdev=3245.84 00:22:39.998 clat (usec): min=884, max=442597, avg=44608.23, stdev=23319.31 00:22:39.998 lat (usec): min=891, max=442605, avg=44616.55, stdev=23319.63 00:22:39.998 clat percentiles (msec): 00:22:39.998 | 1.00th=[ 4], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 41], 00:22:39.998 | 30.00th=[ 41], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43], 00:22:39.998 | 70.00th=[ 44], 80.00th=[ 46], 90.00th=[ 49], 95.00th=[ 53], 00:22:39.998 | 99.00th=[ 178], 99.50th=[ 230], 99.90th=[ 305], 99.95th=[ 376], 00:22:39.998 | 99.99th=[ 430] 00:22:39.998 write: IOPS=2722, BW=10.6MiB/s (11.2MB/s)(256MiB/24070msec); 0 zone resets 00:22:39.998 slat (usec): min=5, max=417, avg=11.65, stdev= 7.45 00:22:39.998 clat (usec): min=565, max=132854, avg=21431.39, stdev=35856.19 00:22:39.998 lat (usec): min=583, max=132865, avg=21443.04, stdev=35856.37 00:22:39.998 clat percentiles (usec): 00:22:39.998 | 1.00th=[ 1057], 5.00th=[ 1303], 10.00th=[ 1467], 20.00th=[ 1696], 00:22:39.998 | 30.00th=[ 1926], 40.00th=[ 2245], 50.00th=[ 3523], 60.00th=[ 7242], 00:22:39.998 | 70.00th=[ 12649], 80.00th=[ 18482], 90.00th=[ 99091], 95.00th=[107480], 00:22:39.998 | 99.00th=[116917], 99.50th=[121111], 99.90th=[127402], 99.95th=[128451], 00:22:39.998 | 99.99th=[131597] 00:22:39.998 bw ( KiB/s): min= 72, max=41728, per=72.94%, avg=15887.52, stdev=10040.20, samples=33 00:22:39.998 iops : min= 18, max=10432, avg=3971.88, stdev=2510.05, samples=33 00:22:39.998 lat (usec) : 750=0.02%, 1000=0.31% 00:22:39.998 lat (msec) : 2=16.43%, 4=9.18%, 10=8.47%, 20=7.76%, 50=46.08% 00:22:39.998 lat (msec) : 100=6.26%, 250=5.30%, 500=0.18% 00:22:39.998 cpu : usr=98.89%, sys=0.41%, ctx=1165, majf=0, minf=5549 00:22:39.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:39.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.998 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:39.998 issued rwts: total=65190,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:39.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:39.998 second_half: (groupid=0, jobs=1): err= 0: pid=77324: Tue Nov 26 18:24:10 2024 00:22:39.998 read: IOPS=1936, BW=7745KiB/s (7931kB/s)(254MiB/33647msec) 00:22:39.998 slat (nsec): min=4729, max=75442, avg=8043.80, stdev=3041.23 00:22:39.998 clat (usec): min=846, max=328531, avg=44366.24, stdev=17555.70 00:22:39.998 lat (usec): min=855, max=328540, avg=44374.28, stdev=17556.01 00:22:39.998 clat percentiles (msec): 00:22:39.998 | 1.00th=[ 5], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 41], 00:22:39.998 | 30.00th=[ 41], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 43], 00:22:39.998 | 70.00th=[ 44], 80.00th=[ 47], 90.00th=[ 49], 95.00th=[ 55], 00:22:39.998 | 
99.00th=[ 136], 99.50th=[ 182], 99.90th=[ 245], 99.95th=[ 271], 00:22:39.998 | 99.99th=[ 288] 00:22:39.998 write: IOPS=2951, BW=11.5MiB/s (12.1MB/s)(256MiB/22204msec); 0 zone resets 00:22:39.998 slat (usec): min=5, max=628, avg=11.11, stdev= 7.09 00:22:39.998 clat (usec): min=569, max=159530, avg=21582.94, stdev=36062.10 00:22:39.998 lat (usec): min=578, max=159538, avg=21594.04, stdev=36062.43 00:22:39.998 clat percentiles (usec): 00:22:39.998 | 1.00th=[ 1074], 5.00th=[ 1319], 10.00th=[ 1467], 20.00th=[ 1696], 00:22:39.998 | 30.00th=[ 1909], 40.00th=[ 2278], 50.00th=[ 4424], 60.00th=[ 8225], 00:22:39.998 | 70.00th=[ 13566], 80.00th=[ 17957], 90.00th=[100140], 95.00th=[108528], 00:22:39.998 | 99.00th=[119014], 99.50th=[122160], 99.90th=[129500], 99.95th=[137364], 00:22:39.998 | 99.99th=[156238] 00:22:39.998 bw ( KiB/s): min= 2064, max=35680, per=77.64%, avg=16912.52, stdev=9006.12, samples=31 00:22:39.998 iops : min= 516, max= 8920, avg=4228.13, stdev=2251.53, samples=31 00:22:39.998 lat (usec) : 750=0.01%, 1000=0.27% 00:22:39.998 lat (msec) : 2=16.45%, 4=8.22%, 10=8.17%, 20=9.21%, 50=45.72% 00:22:39.999 lat (msec) : 100=6.22%, 250=5.69%, 500=0.04% 00:22:39.999 cpu : usr=99.03%, sys=0.29%, ctx=342, majf=0, minf=5558 00:22:39.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:39.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.999 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:39.999 issued rwts: total=65147,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:39.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:39.999 00:22:39.999 Run status group 0 (all jobs): 00:22:39.999 READ: bw=15.1MiB/s (15.8MB/s), 7734KiB/s-7745KiB/s (7919kB/s-7931kB/s), io=509MiB (534MB), run=33647-33717msec 00:22:39.999 WRITE: bw=21.3MiB/s (22.3MB/s), 10.6MiB/s-11.5MiB/s (11.2MB/s-12.1MB/s), io=512MiB (537MB), run=22204-24070msec 00:22:39.999 ----------------------------------------------------- 00:22:39.999 Suppressions used: 00:22:39.999 count bytes template 00:22:39.999 2 10 /usr/src/fio/parse.c 00:22:39.999 1 96 /usr/src/fio/iolog.c 00:22:39.999 1 8 libtcmalloc_minimal.so 00:22:39.999 1 904 libcrypto.so 00:22:39.999 ----------------------------------------------------- 00:22:39.999 00:22:39.999 18:24:12 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:22:39.999 18:24:12 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:39.999 18:24:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:39.999 18:24:12 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:39.999 18:24:12 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:22:39.999 18:24:12 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.999 18:24:12 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:39.999 18:24:12 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:39.999 18:24:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:39.999 18:24:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:39.999 18:24:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:39.999 18:24:12 
ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:39.999 18:24:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:39.999 18:24:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:22:39.999 18:24:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:39.999 18:24:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:39.999 18:24:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:39.999 18:24:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:22:39.999 18:24:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:39.999 18:24:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:39.999 18:24:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:39.999 18:24:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:22:39.999 18:24:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:39.999 18:24:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:39.999 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:39.999 fio-3.35 00:22:39.999 Starting 1 thread 00:22:58.083 00:22:58.083 test: (groupid=0, jobs=1): err= 0: pid=77722: Tue Nov 26 18:24:31 2024 00:22:58.083 read: IOPS=6022, BW=23.5MiB/s (24.7MB/s)(255MiB/10826msec) 00:22:58.083 slat (nsec): min=4138, max=78574, avg=7239.85, stdev=3436.08 00:22:58.083 clat (usec): min=920, max=41931, avg=21240.28, stdev=1184.82 00:22:58.083 lat (usec): min=940, max=41937, avg=21247.52, stdev=1184.84 00:22:58.083 clat percentiles (usec): 00:22:58.083 | 1.00th=[19530], 5.00th=[20055], 10.00th=[20317], 20.00th=[20579], 00:22:58.083 | 30.00th=[20841], 40.00th=[21103], 50.00th=[21103], 60.00th=[21365], 00:22:58.083 | 70.00th=[21627], 80.00th=[21627], 90.00th=[22152], 95.00th=[22676], 00:22:58.083 | 99.00th=[25297], 99.50th=[25822], 99.90th=[31327], 99.95th=[36963], 00:22:58.083 | 99.99th=[41157] 00:22:58.083 write: IOPS=11.0k, BW=43.0MiB/s (45.0MB/s)(256MiB/5959msec); 0 zone resets 00:22:58.083 slat (usec): min=5, max=528, avg=10.32, stdev= 7.39 00:22:58.083 clat (usec): min=707, max=72938, avg=11576.70, stdev=14368.08 00:22:58.083 lat (usec): min=715, max=72945, avg=11587.01, stdev=14368.09 00:22:58.083 clat percentiles (usec): 00:22:58.083 | 1.00th=[ 988], 5.00th=[ 1205], 10.00th=[ 1336], 20.00th=[ 1516], 00:22:58.083 | 30.00th=[ 1713], 40.00th=[ 2212], 50.00th=[ 8029], 60.00th=[ 8979], 00:22:58.083 | 70.00th=[10290], 80.00th=[11731], 90.00th=[41681], 95.00th=[45351], 00:22:58.083 | 99.00th=[49021], 99.50th=[50070], 99.90th=[52691], 99.95th=[58983], 00:22:58.083 | 99.99th=[69731] 00:22:58.083 bw ( KiB/s): min=32640, max=58656, per=99.32%, avg=43690.67, stdev=7881.51, samples=12 00:22:58.083 iops : min= 8160, max=14664, avg=10922.67, stdev=1970.36, samples=12 00:22:58.083 lat (usec) : 750=0.01%, 1000=0.57% 00:22:58.083 lat (msec) : 2=18.40%, 4=1.95%, 10=13.14%, 20=10.60%, 50=55.08% 00:22:58.083 lat (msec) : 100=0.26% 00:22:58.083 cpu : usr=98.55%, sys=0.58%, ctx=31, majf=0, minf=5565 
00:22:58.083 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:58.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:58.083 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:58.083 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:58.083 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:58.083 00:22:58.083 Run status group 0 (all jobs): 00:22:58.083 READ: bw=23.5MiB/s (24.7MB/s), 23.5MiB/s-23.5MiB/s (24.7MB/s-24.7MB/s), io=255MiB (267MB), run=10826-10826msec 00:22:58.083 WRITE: bw=43.0MiB/s (45.0MB/s), 43.0MiB/s-43.0MiB/s (45.0MB/s-45.0MB/s), io=256MiB (268MB), run=5959-5959msec 00:22:58.341 ----------------------------------------------------- 00:22:58.341 Suppressions used: 00:22:58.341 count bytes template 00:22:58.341 1 5 /usr/src/fio/parse.c 00:22:58.341 2 192 /usr/src/fio/iolog.c 00:22:58.341 1 8 libtcmalloc_minimal.so 00:22:58.341 1 904 libcrypto.so 00:22:58.341 ----------------------------------------------------- 00:22:58.341 00:22:58.341 18:24:32 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:22:58.341 18:24:32 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:58.341 18:24:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:58.600 18:24:32 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:58.600 Remove shared memory files 00:22:58.600 18:24:32 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:22:58.600 18:24:32 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:58.600 18:24:32 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:22:58.600 18:24:32 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:22:58.600 18:24:32 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57944 /dev/shm/spdk_tgt_trace.pid75904 00:22:58.600 18:24:32 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:58.600 18:24:32 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:22:58.600 ************************************ 00:22:58.600 END TEST ftl_fio_basic 00:22:58.600 ************************************ 00:22:58.600 00:22:58.600 real 1m21.120s 00:22:58.600 user 3m3.188s 00:22:58.600 sys 0m4.714s 00:22:58.600 18:24:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:58.600 18:24:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:58.600 18:24:32 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:22:58.600 18:24:32 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:58.600 18:24:32 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:58.600 18:24:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:58.600 ************************************ 00:22:58.600 START TEST ftl_bdevperf 00:22:58.600 ************************************ 00:22:58.600 18:24:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:22:58.600 * Looking for test storage... 
00:22:58.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:58.600 18:24:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:58.600 18:24:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:58.600 18:24:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:58.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.860 --rc genhtml_branch_coverage=1 00:22:58.860 --rc genhtml_function_coverage=1 00:22:58.860 --rc genhtml_legend=1 00:22:58.860 --rc geninfo_all_blocks=1 00:22:58.860 --rc geninfo_unexecuted_blocks=1 00:22:58.860 00:22:58.860 ' 00:22:58.860 18:24:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:58.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.861 --rc genhtml_branch_coverage=1 00:22:58.861 
--rc genhtml_function_coverage=1 00:22:58.861 --rc genhtml_legend=1 00:22:58.861 --rc geninfo_all_blocks=1 00:22:58.861 --rc geninfo_unexecuted_blocks=1 00:22:58.861 00:22:58.861 ' 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:58.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.861 --rc genhtml_branch_coverage=1 00:22:58.861 --rc genhtml_function_coverage=1 00:22:58.861 --rc genhtml_legend=1 00:22:58.861 --rc geninfo_all_blocks=1 00:22:58.861 --rc geninfo_unexecuted_blocks=1 00:22:58.861 00:22:58.861 ' 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:58.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.861 --rc genhtml_branch_coverage=1 00:22:58.861 --rc genhtml_function_coverage=1 00:22:58.861 --rc genhtml_legend=1 00:22:58.861 --rc geninfo_all_blocks=1 00:22:58.861 --rc geninfo_unexecuted_blocks=1 00:22:58.861 00:22:58.861 ' 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77992 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77992 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77992 ']' 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.861 18:24:33 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:58.861 [2024-11-26 18:24:33.227236] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
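The launch traced above is the standard SPDK pattern for driving bdevperf over RPC: start the app with -z so it idles until told to run, install a cleanup trap, then poll the RPC socket before sending any configuration. A minimal sketch of that pattern, assuming the default /var/tmp/spdk.sock socket (the poll loop stands in for autotest_common.sh's waitforlisten; rpc_get_methods is simply a cheap RPC that succeeds once the listener is up):

  "$rootdir/build/examples/bdevperf" -z -T ftl0 &
  bdevperf_pid=$!
  trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
  # block until the app answers on its RPC socket before configuring it
  until "$rootdir/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done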
00:22:58.861 [2024-11-26 18:24:33.227748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77992 ] 00:22:59.120 [2024-11-26 18:24:33.406699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.120 [2024-11-26 18:24:33.557009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.057 18:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.057 18:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:23:00.057 18:24:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:00.057 18:24:34 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:23:00.057 18:24:34 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:00.057 18:24:34 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:23:00.057 18:24:34 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:23:00.057 18:24:34 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:00.315 18:24:34 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:00.315 18:24:34 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:23:00.315 18:24:34 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:00.315 18:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:00.315 18:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:00.316 18:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:23:00.316 18:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:23:00.316 18:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:00.574 18:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:00.574 { 00:23:00.574 "name": "nvme0n1", 00:23:00.574 "aliases": [ 00:23:00.574 "0c63e995-420d-4058-b659-559e9e11506e" 00:23:00.574 ], 00:23:00.574 "product_name": "NVMe disk", 00:23:00.574 "block_size": 4096, 00:23:00.574 "num_blocks": 1310720, 00:23:00.574 "uuid": "0c63e995-420d-4058-b659-559e9e11506e", 00:23:00.574 "numa_id": -1, 00:23:00.574 "assigned_rate_limits": { 00:23:00.574 "rw_ios_per_sec": 0, 00:23:00.574 "rw_mbytes_per_sec": 0, 00:23:00.574 "r_mbytes_per_sec": 0, 00:23:00.574 "w_mbytes_per_sec": 0 00:23:00.574 }, 00:23:00.574 "claimed": true, 00:23:00.574 "claim_type": "read_many_write_one", 00:23:00.574 "zoned": false, 00:23:00.574 "supported_io_types": { 00:23:00.574 "read": true, 00:23:00.574 "write": true, 00:23:00.574 "unmap": true, 00:23:00.574 "flush": true, 00:23:00.574 "reset": true, 00:23:00.574 "nvme_admin": true, 00:23:00.574 "nvme_io": true, 00:23:00.574 "nvme_io_md": false, 00:23:00.574 "write_zeroes": true, 00:23:00.574 "zcopy": false, 00:23:00.574 "get_zone_info": false, 00:23:00.574 "zone_management": false, 00:23:00.574 "zone_append": false, 00:23:00.574 "compare": true, 00:23:00.574 "compare_and_write": false, 00:23:00.574 "abort": true, 00:23:00.574 "seek_hole": false, 00:23:00.574 "seek_data": false, 00:23:00.574 "copy": true, 00:23:00.574 "nvme_iov_md": false 00:23:00.574 }, 00:23:00.574 "driver_specific": { 00:23:00.574 
"nvme": [ 00:23:00.574 { 00:23:00.574 "pci_address": "0000:00:11.0", 00:23:00.574 "trid": { 00:23:00.574 "trtype": "PCIe", 00:23:00.574 "traddr": "0000:00:11.0" 00:23:00.574 }, 00:23:00.574 "ctrlr_data": { 00:23:00.574 "cntlid": 0, 00:23:00.574 "vendor_id": "0x1b36", 00:23:00.574 "model_number": "QEMU NVMe Ctrl", 00:23:00.574 "serial_number": "12341", 00:23:00.574 "firmware_revision": "8.0.0", 00:23:00.574 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:00.574 "oacs": { 00:23:00.574 "security": 0, 00:23:00.574 "format": 1, 00:23:00.574 "firmware": 0, 00:23:00.575 "ns_manage": 1 00:23:00.575 }, 00:23:00.575 "multi_ctrlr": false, 00:23:00.575 "ana_reporting": false 00:23:00.575 }, 00:23:00.575 "vs": { 00:23:00.575 "nvme_version": "1.4" 00:23:00.575 }, 00:23:00.575 "ns_data": { 00:23:00.575 "id": 1, 00:23:00.575 "can_share": false 00:23:00.575 } 00:23:00.575 } 00:23:00.575 ], 00:23:00.575 "mp_policy": "active_passive" 00:23:00.575 } 00:23:00.575 } 00:23:00.575 ]' 00:23:00.575 18:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:00.575 18:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:23:00.575 18:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:00.575 18:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:00.575 18:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:00.575 18:24:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:23:00.575 18:24:34 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:23:00.575 18:24:34 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:00.575 18:24:34 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:23:00.575 18:24:34 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:00.575 18:24:34 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:00.840 18:24:35 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=4bd0aef1-9c84-463c-9003-6421619df3f4 00:23:00.840 18:24:35 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:23:00.840 18:24:35 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4bd0aef1-9c84-463c-9003-6421619df3f4 00:23:01.132 18:24:35 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:01.391 18:24:35 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=40e1cd11-f05f-4621-96c0-d23fad81268e 00:23:01.391 18:24:35 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 40e1cd11-f05f-4621-96c0-d23fad81268e 00:23:01.650 18:24:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=7b82a4c2-a05c-404e-8101-de947cc5a563 00:23:01.650 18:24:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7b82a4c2-a05c-404e-8101-de947cc5a563 00:23:01.650 18:24:35 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:23:01.650 18:24:35 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:01.650 18:24:35 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=7b82a4c2-a05c-404e-8101-de947cc5a563 00:23:01.650 18:24:35 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:23:01.650 18:24:35 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 7b82a4c2-a05c-404e-8101-de947cc5a563 00:23:01.650 18:24:35 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=7b82a4c2-a05c-404e-8101-de947cc5a563 00:23:01.650 18:24:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:01.650 18:24:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:23:01.650 18:24:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:23:01.650 18:24:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7b82a4c2-a05c-404e-8101-de947cc5a563 00:23:01.910 18:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:01.910 { 00:23:01.910 "name": "7b82a4c2-a05c-404e-8101-de947cc5a563", 00:23:01.910 "aliases": [ 00:23:01.910 "lvs/nvme0n1p0" 00:23:01.910 ], 00:23:01.910 "product_name": "Logical Volume", 00:23:01.910 "block_size": 4096, 00:23:01.910 "num_blocks": 26476544, 00:23:01.910 "uuid": "7b82a4c2-a05c-404e-8101-de947cc5a563", 00:23:01.910 "assigned_rate_limits": { 00:23:01.910 "rw_ios_per_sec": 0, 00:23:01.910 "rw_mbytes_per_sec": 0, 00:23:01.910 "r_mbytes_per_sec": 0, 00:23:01.910 "w_mbytes_per_sec": 0 00:23:01.910 }, 00:23:01.910 "claimed": false, 00:23:01.910 "zoned": false, 00:23:01.910 "supported_io_types": { 00:23:01.910 "read": true, 00:23:01.910 "write": true, 00:23:01.910 "unmap": true, 00:23:01.910 "flush": false, 00:23:01.910 "reset": true, 00:23:01.910 "nvme_admin": false, 00:23:01.910 "nvme_io": false, 00:23:01.910 "nvme_io_md": false, 00:23:01.910 "write_zeroes": true, 00:23:01.910 "zcopy": false, 00:23:01.910 "get_zone_info": false, 00:23:01.910 "zone_management": false, 00:23:01.910 "zone_append": false, 00:23:01.910 "compare": false, 00:23:01.910 "compare_and_write": false, 00:23:01.910 "abort": false, 00:23:01.910 "seek_hole": true, 00:23:01.910 "seek_data": true, 00:23:01.910 "copy": false, 00:23:01.910 "nvme_iov_md": false 00:23:01.910 }, 00:23:01.910 "driver_specific": { 00:23:01.910 "lvol": { 00:23:01.910 "lvol_store_uuid": "40e1cd11-f05f-4621-96c0-d23fad81268e", 00:23:01.910 "base_bdev": "nvme0n1", 00:23:01.910 "thin_provision": true, 00:23:01.910 "num_allocated_clusters": 0, 00:23:01.910 "snapshot": false, 00:23:01.910 "clone": false, 00:23:01.910 "esnap_clone": false 00:23:01.910 } 00:23:01.910 } 00:23:01.910 } 00:23:01.910 ]' 00:23:01.910 18:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:01.910 18:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:23:01.910 18:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:01.910 18:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:01.910 18:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:01.910 18:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:23:01.910 18:24:36 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:23:01.910 18:24:36 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:23:01.910 18:24:36 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:02.477 18:24:36 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:02.477 18:24:36 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:02.477 18:24:36 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 7b82a4c2-a05c-404e-8101-de947cc5a563 00:23:02.477 18:24:36 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=7b82a4c2-a05c-404e-8101-de947cc5a563 00:23:02.477 18:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:02.477 18:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:23:02.477 18:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:23:02.477 18:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7b82a4c2-a05c-404e-8101-de947cc5a563 00:23:02.736 18:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:02.736 { 00:23:02.736 "name": "7b82a4c2-a05c-404e-8101-de947cc5a563", 00:23:02.736 "aliases": [ 00:23:02.736 "lvs/nvme0n1p0" 00:23:02.736 ], 00:23:02.736 "product_name": "Logical Volume", 00:23:02.736 "block_size": 4096, 00:23:02.736 "num_blocks": 26476544, 00:23:02.736 "uuid": "7b82a4c2-a05c-404e-8101-de947cc5a563", 00:23:02.736 "assigned_rate_limits": { 00:23:02.736 "rw_ios_per_sec": 0, 00:23:02.736 "rw_mbytes_per_sec": 0, 00:23:02.736 "r_mbytes_per_sec": 0, 00:23:02.736 "w_mbytes_per_sec": 0 00:23:02.736 }, 00:23:02.736 "claimed": false, 00:23:02.736 "zoned": false, 00:23:02.736 "supported_io_types": { 00:23:02.736 "read": true, 00:23:02.736 "write": true, 00:23:02.736 "unmap": true, 00:23:02.736 "flush": false, 00:23:02.736 "reset": true, 00:23:02.736 "nvme_admin": false, 00:23:02.736 "nvme_io": false, 00:23:02.736 "nvme_io_md": false, 00:23:02.736 "write_zeroes": true, 00:23:02.736 "zcopy": false, 00:23:02.736 "get_zone_info": false, 00:23:02.736 "zone_management": false, 00:23:02.736 "zone_append": false, 00:23:02.736 "compare": false, 00:23:02.736 "compare_and_write": false, 00:23:02.736 "abort": false, 00:23:02.736 "seek_hole": true, 00:23:02.736 "seek_data": true, 00:23:02.736 "copy": false, 00:23:02.736 "nvme_iov_md": false 00:23:02.736 }, 00:23:02.736 "driver_specific": { 00:23:02.736 "lvol": { 00:23:02.736 "lvol_store_uuid": "40e1cd11-f05f-4621-96c0-d23fad81268e", 00:23:02.736 "base_bdev": "nvme0n1", 00:23:02.736 "thin_provision": true, 00:23:02.736 "num_allocated_clusters": 0, 00:23:02.736 "snapshot": false, 00:23:02.736 "clone": false, 00:23:02.736 "esnap_clone": false 00:23:02.736 } 00:23:02.736 } 00:23:02.736 } 00:23:02.736 ]' 00:23:02.736 18:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:02.736 18:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:23:02.736 18:24:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:02.736 18:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:02.736 18:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:02.736 18:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:23:02.736 18:24:37 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:23:02.736 18:24:37 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:02.995 18:24:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:23:02.995 18:24:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 7b82a4c2-a05c-404e-8101-de947cc5a563 00:23:02.995 18:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=7b82a4c2-a05c-404e-8101-de947cc5a563 00:23:02.995 18:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:02.995 18:24:37 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:23:02.995 18:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:23:02.995 18:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7b82a4c2-a05c-404e-8101-de947cc5a563 00:23:03.253 18:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:03.253 { 00:23:03.253 "name": "7b82a4c2-a05c-404e-8101-de947cc5a563", 00:23:03.253 "aliases": [ 00:23:03.253 "lvs/nvme0n1p0" 00:23:03.253 ], 00:23:03.253 "product_name": "Logical Volume", 00:23:03.253 "block_size": 4096, 00:23:03.253 "num_blocks": 26476544, 00:23:03.254 "uuid": "7b82a4c2-a05c-404e-8101-de947cc5a563", 00:23:03.254 "assigned_rate_limits": { 00:23:03.254 "rw_ios_per_sec": 0, 00:23:03.254 "rw_mbytes_per_sec": 0, 00:23:03.254 "r_mbytes_per_sec": 0, 00:23:03.254 "w_mbytes_per_sec": 0 00:23:03.254 }, 00:23:03.254 "claimed": false, 00:23:03.254 "zoned": false, 00:23:03.254 "supported_io_types": { 00:23:03.254 "read": true, 00:23:03.254 "write": true, 00:23:03.254 "unmap": true, 00:23:03.254 "flush": false, 00:23:03.254 "reset": true, 00:23:03.254 "nvme_admin": false, 00:23:03.254 "nvme_io": false, 00:23:03.254 "nvme_io_md": false, 00:23:03.254 "write_zeroes": true, 00:23:03.254 "zcopy": false, 00:23:03.254 "get_zone_info": false, 00:23:03.254 "zone_management": false, 00:23:03.254 "zone_append": false, 00:23:03.254 "compare": false, 00:23:03.254 "compare_and_write": false, 00:23:03.254 "abort": false, 00:23:03.254 "seek_hole": true, 00:23:03.254 "seek_data": true, 00:23:03.254 "copy": false, 00:23:03.254 "nvme_iov_md": false 00:23:03.254 }, 00:23:03.254 "driver_specific": { 00:23:03.254 "lvol": { 00:23:03.254 "lvol_store_uuid": "40e1cd11-f05f-4621-96c0-d23fad81268e", 00:23:03.254 "base_bdev": "nvme0n1", 00:23:03.254 "thin_provision": true, 00:23:03.254 "num_allocated_clusters": 0, 00:23:03.254 "snapshot": false, 00:23:03.254 "clone": false, 00:23:03.254 "esnap_clone": false 00:23:03.254 } 00:23:03.254 } 00:23:03.254 } 00:23:03.254 ]' 00:23:03.254 18:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:03.254 18:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:23:03.254 18:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:03.254 18:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:03.254 18:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:03.254 18:24:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:23:03.254 18:24:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:23:03.254 18:24:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7b82a4c2-a05c-404e-8101-de947cc5a563 -c nvc0n1p0 --l2p_dram_limit 20 00:23:03.512 [2024-11-26 18:24:37.951624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.512 [2024-11-26 18:24:37.951938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:03.512 [2024-11-26 18:24:37.951972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:03.512 [2024-11-26 18:24:37.951992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.512 [2024-11-26 18:24:37.952087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.512 [2024-11-26 18:24:37.952109] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:03.512 [2024-11-26 18:24:37.952123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:03.512 [2024-11-26 18:24:37.952136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.512 [2024-11-26 18:24:37.952193] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:03.512 [2024-11-26 18:24:37.953431] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:03.512 [2024-11-26 18:24:37.953463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.512 [2024-11-26 18:24:37.953479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:03.512 [2024-11-26 18:24:37.953502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.279 ms 00:23:03.512 [2024-11-26 18:24:37.953516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.512 [2024-11-26 18:24:37.953653] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 70da5b9c-103a-49b9-a851-79d4d389bf70 00:23:03.512 [2024-11-26 18:24:37.955722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.512 [2024-11-26 18:24:37.955762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:03.512 [2024-11-26 18:24:37.955786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:23:03.512 [2024-11-26 18:24:37.955798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.512 [2024-11-26 18:24:37.966931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.512 [2024-11-26 18:24:37.967245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:03.512 [2024-11-26 18:24:37.967283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.044 ms 00:23:03.512 [2024-11-26 18:24:37.967309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.512 [2024-11-26 18:24:37.967457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.512 [2024-11-26 18:24:37.967490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:03.512 [2024-11-26 18:24:37.967510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:23:03.512 [2024-11-26 18:24:37.967520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.512 [2024-11-26 18:24:37.967719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.512 [2024-11-26 18:24:37.967740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:03.512 [2024-11-26 18:24:37.967755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:03.512 [2024-11-26 18:24:37.967766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.512 [2024-11-26 18:24:37.967818] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:03.772 [2024-11-26 18:24:37.973754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.772 [2024-11-26 18:24:37.973812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:03.772 [2024-11-26 18:24:37.973828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.966 ms 00:23:03.772 [2024-11-26 18:24:37.973844] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.772 [2024-11-26 18:24:37.973882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.772 [2024-11-26 18:24:37.973900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:03.772 [2024-11-26 18:24:37.973912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:03.772 [2024-11-26 18:24:37.973925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.772 [2024-11-26 18:24:37.973978] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:03.772 [2024-11-26 18:24:37.974168] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:03.772 [2024-11-26 18:24:37.974203] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:03.772 [2024-11-26 18:24:37.974222] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:03.772 [2024-11-26 18:24:37.974236] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:03.772 [2024-11-26 18:24:37.974252] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:03.772 [2024-11-26 18:24:37.974263] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:03.772 [2024-11-26 18:24:37.974276] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:03.772 [2024-11-26 18:24:37.974287] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:03.772 [2024-11-26 18:24:37.974300] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:03.772 [2024-11-26 18:24:37.974314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.772 [2024-11-26 18:24:37.974328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:03.772 [2024-11-26 18:24:37.974355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:23:03.772 [2024-11-26 18:24:37.974370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.772 [2024-11-26 18:24:37.974458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.772 [2024-11-26 18:24:37.974474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:03.772 [2024-11-26 18:24:37.974502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:23:03.772 [2024-11-26 18:24:37.974518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.772 [2024-11-26 18:24:37.974666] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:03.772 [2024-11-26 18:24:37.974694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:03.772 [2024-11-26 18:24:37.974707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:03.772 [2024-11-26 18:24:37.974721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:03.772 [2024-11-26 18:24:37.974733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:03.772 [2024-11-26 18:24:37.974746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:03.772 [2024-11-26 18:24:37.974756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:03.772 
[2024-11-26 18:24:37.974798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:03.772 [2024-11-26 18:24:37.974808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:03.772 [2024-11-26 18:24:37.974821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:03.772 [2024-11-26 18:24:37.974830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:03.772 [2024-11-26 18:24:37.974856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:03.772 [2024-11-26 18:24:37.974866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:03.772 [2024-11-26 18:24:37.974883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:03.772 [2024-11-26 18:24:37.974908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:03.772 [2024-11-26 18:24:37.974939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:03.772 [2024-11-26 18:24:37.974948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:03.772 [2024-11-26 18:24:37.974962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:03.772 [2024-11-26 18:24:37.974971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:03.772 [2024-11-26 18:24:37.974982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:03.772 [2024-11-26 18:24:37.975006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:03.772 [2024-11-26 18:24:37.975061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:03.772 [2024-11-26 18:24:37.975071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:03.772 [2024-11-26 18:24:37.975083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:03.772 [2024-11-26 18:24:37.975093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:03.772 [2024-11-26 18:24:37.975137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:03.772 [2024-11-26 18:24:37.975147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:03.772 [2024-11-26 18:24:37.975158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:03.772 [2024-11-26 18:24:37.975168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:03.772 [2024-11-26 18:24:37.975181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:03.772 [2024-11-26 18:24:37.975191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:03.772 [2024-11-26 18:24:37.975206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:03.772 [2024-11-26 18:24:37.975216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:03.772 [2024-11-26 18:24:37.975228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:03.772 [2024-11-26 18:24:37.975238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:03.772 [2024-11-26 18:24:37.975251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:03.772 [2024-11-26 18:24:37.975261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:03.773 [2024-11-26 18:24:37.975273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:03.773 [2024-11-26 18:24:37.975283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:23:03.773 [2024-11-26 18:24:37.975296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:03.773 [2024-11-26 18:24:37.975306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:03.773 [2024-11-26 18:24:37.975319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:03.773 [2024-11-26 18:24:37.975328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:03.773 [2024-11-26 18:24:37.975340] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:03.773 [2024-11-26 18:24:37.975352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:03.773 [2024-11-26 18:24:37.975366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:03.773 [2024-11-26 18:24:37.975377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:03.773 [2024-11-26 18:24:37.975393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:03.773 [2024-11-26 18:24:37.975403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:03.773 [2024-11-26 18:24:37.975428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:03.773 [2024-11-26 18:24:37.975451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:03.773 [2024-11-26 18:24:37.975467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:03.773 [2024-11-26 18:24:37.975479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:03.773 [2024-11-26 18:24:37.975503] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:03.773 [2024-11-26 18:24:37.975518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:03.773 [2024-11-26 18:24:37.975533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:03.773 [2024-11-26 18:24:37.975545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:03.773 [2024-11-26 18:24:37.975587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:03.773 [2024-11-26 18:24:37.975600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:03.773 [2024-11-26 18:24:37.975613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:03.773 [2024-11-26 18:24:37.975625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:03.773 [2024-11-26 18:24:37.975638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:03.773 [2024-11-26 18:24:37.975648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:03.773 [2024-11-26 18:24:37.975664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:03.773 [2024-11-26 18:24:37.975689] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:03.773 [2024-11-26 18:24:37.975701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:03.773 [2024-11-26 18:24:37.975712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:03.773 [2024-11-26 18:24:37.975724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:03.773 [2024-11-26 18:24:37.975734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:03.773 [2024-11-26 18:24:37.975749] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:03.773 [2024-11-26 18:24:37.975761] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:03.773 [2024-11-26 18:24:37.975778] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:03.773 [2024-11-26 18:24:37.975789] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:03.773 [2024-11-26 18:24:37.975802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:03.773 [2024-11-26 18:24:37.975812] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:03.773 [2024-11-26 18:24:37.975826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:03.773 [2024-11-26 18:24:37.975837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:03.773 [2024-11-26 18:24:37.975850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.224 ms 00:23:03.773 [2024-11-26 18:24:37.975860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:03.773 [2024-11-26 18:24:37.975912] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
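The layout dump above is internally consistent and worth a quick arithmetic check. The L2P maps 20971520 user blocks at a 4-byte address per block, so:

  20971520 entries * 4 B   = 80 MiB   (matches "Region l2p ... blocks: 80.00 MiB")
  20971520 blocks  * 4 KiB = 80 GiB   of mapped user capacity

out of the 102400 MiB data_btm region, the remainder going to band metadata and over-provisioning. Because the run passed --l2p_dram_limit 20, only a small slice of that 80 MiB table may stay resident in RAM; the startup below confirms this ("l2p maximum resident size is: 19 (of 20) MiB") and first scrubs the 5 NV cache chunks, which dominates the total startup time (3292.903 ms of the 3721.402 ms reported at the end).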
00:23:03.773 [2024-11-26 18:24:37.975950] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:07.058 [2024-11-26 18:24:41.268799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.058 [2024-11-26 18:24:41.268868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:07.058 [2024-11-26 18:24:41.268908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3292.903 ms 00:23:07.058 [2024-11-26 18:24:41.268930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.058 [2024-11-26 18:24:41.305238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.058 [2024-11-26 18:24:41.305296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:07.058 [2024-11-26 18:24:41.305336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.057 ms 00:23:07.058 [2024-11-26 18:24:41.305348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.058 [2024-11-26 18:24:41.305553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.059 [2024-11-26 18:24:41.305587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:07.059 [2024-11-26 18:24:41.305622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:23:07.059 [2024-11-26 18:24:41.305648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.059 [2024-11-26 18:24:41.352848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.059 [2024-11-26 18:24:41.352920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:07.059 [2024-11-26 18:24:41.352973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.143 ms 00:23:07.059 [2024-11-26 18:24:41.352985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.059 [2024-11-26 18:24:41.353043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.059 [2024-11-26 18:24:41.353058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:07.059 [2024-11-26 18:24:41.353072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:07.059 [2024-11-26 18:24:41.353084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.059 [2024-11-26 18:24:41.353732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.059 [2024-11-26 18:24:41.353754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:07.059 [2024-11-26 18:24:41.353770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:23:07.059 [2024-11-26 18:24:41.353782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.059 [2024-11-26 18:24:41.353936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.059 [2024-11-26 18:24:41.353968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:07.059 [2024-11-26 18:24:41.353985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:23:07.059 [2024-11-26 18:24:41.353995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.059 [2024-11-26 18:24:41.371609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.059 [2024-11-26 18:24:41.371647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:07.059 [2024-11-26 
18:24:41.371681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.586 ms 00:23:07.059 [2024-11-26 18:24:41.371705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.059 [2024-11-26 18:24:41.384464] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:23:07.059 [2024-11-26 18:24:41.391850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.059 [2024-11-26 18:24:41.391904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:07.059 [2024-11-26 18:24:41.391919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.043 ms 00:23:07.059 [2024-11-26 18:24:41.391931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.059 [2024-11-26 18:24:41.469819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.059 [2024-11-26 18:24:41.469908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:07.059 [2024-11-26 18:24:41.469929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.853 ms 00:23:07.059 [2024-11-26 18:24:41.469959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.059 [2024-11-26 18:24:41.470161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.059 [2024-11-26 18:24:41.470186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:07.059 [2024-11-26 18:24:41.470199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:23:07.059 [2024-11-26 18:24:41.470216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.059 [2024-11-26 18:24:41.494939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.059 [2024-11-26 18:24:41.494983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:07.059 [2024-11-26 18:24:41.494999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.669 ms 00:23:07.059 [2024-11-26 18:24:41.495012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.317 [2024-11-26 18:24:41.519381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.317 [2024-11-26 18:24:41.519569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:07.317 [2024-11-26 18:24:41.519595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.330 ms 00:23:07.317 [2024-11-26 18:24:41.519609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.317 [2024-11-26 18:24:41.520407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.317 [2024-11-26 18:24:41.520433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:07.317 [2024-11-26 18:24:41.520446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:23:07.317 [2024-11-26 18:24:41.520458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.317 [2024-11-26 18:24:41.595388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.317 [2024-11-26 18:24:41.595586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:07.317 [2024-11-26 18:24:41.595613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.891 ms 00:23:07.317 [2024-11-26 18:24:41.595627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.317 [2024-11-26 
18:24:41.621902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.317 [2024-11-26 18:24:41.621944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:07.317 [2024-11-26 18:24:41.621962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.189 ms 00:23:07.317 [2024-11-26 18:24:41.621975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.317 [2024-11-26 18:24:41.646250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.317 [2024-11-26 18:24:41.646291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:07.317 [2024-11-26 18:24:41.646305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.237 ms 00:23:07.317 [2024-11-26 18:24:41.646316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.317 [2024-11-26 18:24:41.671458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.317 [2024-11-26 18:24:41.671636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:07.317 [2024-11-26 18:24:41.671750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.105 ms 00:23:07.317 [2024-11-26 18:24:41.671799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.317 [2024-11-26 18:24:41.671850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.317 [2024-11-26 18:24:41.671873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:07.317 [2024-11-26 18:24:41.671885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:07.317 [2024-11-26 18:24:41.671898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.317 [2024-11-26 18:24:41.672010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:07.317 [2024-11-26 18:24:41.672031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:07.317 [2024-11-26 18:24:41.672042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:23:07.317 [2024-11-26 18:24:41.672054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:07.317 [2024-11-26 18:24:41.673482] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3721.402 ms, result 0 00:23:07.317 { 00:23:07.317 "name": "ftl0", 00:23:07.317 "uuid": "70da5b9c-103a-49b9-a851-79d4d389bf70" 00:23:07.317 } 00:23:07.317 18:24:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:23:07.317 18:24:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:23:07.317 18:24:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:23:07.610 18:24:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:23:07.868 [2024-11-26 18:24:42.141461] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:23:07.868 I/O size of 69632 is greater than zero copy threshold (65536). 00:23:07.868 Zero copy mechanism will not be used. 00:23:07.868 Running I/O for 4 seconds... 
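This first pass is a latency-oriented run: queue depth 1, so each I/O must complete before the next is issued, with an I/O size of 69632 B (17 blocks of 4 KiB, just over the 65536 B threshold, hence the zero-copy notice above). The same run can be issued by hand against an already-started bdevperf; the command below is the one traced here:

  "$rootdir/examples/bdev/bdevperf/bdevperf.py" perform_tests -q 1 -w randwrite -t 4 -o 69632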
00:23:09.731 1560.00 IOPS, 103.59 MiB/s [2024-11-26T18:24:45.565Z] 1578.00 IOPS, 104.79 MiB/s [2024-11-26T18:24:46.501Z] 1601.67 IOPS, 106.36 MiB/s [2024-11-26T18:24:46.501Z] 1615.00 IOPS, 107.25 MiB/s
00:23:12.040 Latency(us)
00:23:12.040 [2024-11-26T18:24:46.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:12.040 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:23:12.040 ftl0 : 4.00 1614.39 107.21 0.00 0.00 649.84 269.96 2427.81
00:23:12.040 [2024-11-26T18:24:46.501Z] ===================================================================================================================
00:23:12.040 [2024-11-26T18:24:46.501Z] Total : 1614.39 107.21 0.00 0.00 649.84 269.96 2427.81
00:23:12.040 [2024-11-26 18:24:46.153163] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:23:12.040 {
00:23:12.040 "results": [
00:23:12.040 {
00:23:12.040 "job": "ftl0",
00:23:12.040 "core_mask": "0x1",
00:23:12.040 "workload": "randwrite",
00:23:12.040 "status": "finished",
00:23:12.040 "queue_depth": 1,
00:23:12.040 "io_size": 69632,
00:23:12.040 "runtime": 4.002125,
00:23:12.040 "iops": 1614.3923540619046,
00:23:12.040 "mibps": 107.20574226192335,
00:23:12.040 "io_failed": 0,
00:23:12.040 "io_timeout": 0,
00:23:12.040 "avg_latency_us": 649.8416873267578,
00:23:12.040 "min_latency_us": 269.96363636363634,
00:23:12.040 "max_latency_us": 2427.8109090909093
00:23:12.040 }
00:23:12.040 ],
00:23:12.040 "core_count": 1
00:23:12.040 }
00:23:12.040 18:24:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
00:23:12.040 [2024-11-26 18:24:46.312595] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:23:12.040 Running I/O for 4 seconds...
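The queue-depth-1 result block above can be sanity-checked from its own fields: mibps is iops times io_size converted to MiB, and with queue depth 1 the average latency cannot drop below 1/IOPS (Little's law). A short self-contained check, with the values copied from the JSON above:

    # Cross-check of the QD=1 randwrite results above (values from the log).
    iops = 1614.3923540619046
    io_size = 69632            # bytes per I/O, from -o 69632
    queue_depth = 1

    mibps = iops * io_size / (1024 * 1024)
    print(round(mibps, 2))     # 107.21, matches "mibps" in the JSON

    # Little's law: avg latency >= queue_depth / IOPS. The gap up to the
    # reported 649.84 us is per-I/O submission and completion overhead.
    print(round(queue_depth / iops * 1e6, 2))  # ~619.43 us

The same arithmetic holds for the deeper-queue runs that follow; at queue depth 128 the reported average latency of roughly 17 ms is dominated by queueing, not by the device.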
00:23:13.913 7978.00 IOPS, 31.16 MiB/s [2024-11-26T18:24:49.342Z] 7527.00 IOPS, 29.40 MiB/s [2024-11-26T18:24:50.719Z] 7432.33 IOPS, 29.03 MiB/s [2024-11-26T18:24:50.719Z] 7496.75 IOPS, 29.28 MiB/s
00:23:16.258 Latency(us)
00:23:16.258 [2024-11-26T18:24:50.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:16.258 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:23:16.258 ftl0 : 4.02 7486.40 29.24 0.00 0.00 17047.68 322.09 32410.53
00:23:16.258 [2024-11-26T18:24:50.719Z] ===================================================================================================================
00:23:16.258 [2024-11-26T18:24:50.719Z] Total : 7486.40 29.24 0.00 0.00 17047.68 0.00 32410.53
00:23:16.258 {
00:23:16.258 "results": [
00:23:16.258 {
00:23:16.258 "job": "ftl0",
00:23:16.258 "core_mask": "0x1",
00:23:16.258 "workload": "randwrite",
00:23:16.258 "status": "finished",
00:23:16.258 "queue_depth": 128,
00:23:16.258 "io_size": 4096,
00:23:16.258 "runtime": 4.02236,
00:23:16.258 "iops": 7486.40101830766,
00:23:16.258 "mibps": 29.243753977764296,
00:23:16.258 "io_failed": 0,
00:23:16.258 "io_timeout": 0,
00:23:16.258 "avg_latency_us": 17047.680651123195,
00:23:16.258 "min_latency_us": 322.0945454545455,
00:23:16.258 "max_latency_us": 32410.53090909091
00:23:16.258 }
00:23:16.258 ],
00:23:16.258 "core_count": 1
00:23:16.258 }
00:23:16.258 [2024-11-26 18:24:50.346601] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:23:16.258 18:24:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
00:23:16.258 [2024-11-26 18:24:50.499356] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:23:16.258 Running I/O for 4 seconds...
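By this point all three bdevperf passes of the test have been launched the same way and differ only in queue depth, workload, and I/O size. A hedged sketch of the equivalent sweep (the script path and every flag are taken verbatim from the bdevperf.sh@30, @31, and @32 invocations above; in the harness, bdevperf.py is also pointed at the running bdevperf's RPC socket, which is omitted here):

    # The three perform_tests invocations from the log, expressed as a loop.
    import subprocess

    BDEVPERF_PY = "/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py"
    runs = [
        ("1",   "randwrite", "69632"),  # QD 1, 68 KiB random writes
        ("128", "randwrite", "4096"),   # QD 128, 4 KiB random writes
        ("128", "verify",    "4096"),   # QD 128, 4 KiB verify workload
    ]
    for q, w, o in runs:
        subprocess.run([BDEVPERF_PY, "perform_tests",
                        "-q", q, "-w", w, "-t", "4", "-o", o], check=True)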
00:23:18.133 4880.00 IOPS, 19.06 MiB/s [2024-11-26T18:24:53.530Z] 4867.50 IOPS, 19.01 MiB/s [2024-11-26T18:24:54.905Z] 4904.67 IOPS, 19.16 MiB/s [2024-11-26T18:24:54.905Z] 4995.25 IOPS, 19.51 MiB/s
00:23:20.444 Latency(us)
00:23:20.444 [2024-11-26T18:24:54.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:20.444 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:20.444 Verification LBA range: start 0x0 length 0x1400000
00:23:20.444 ftl0 : 4.01 5009.24 19.57 0.00 0.00 25460.00 381.67 34078.72
00:23:20.444 [2024-11-26T18:24:54.905Z] ===================================================================================================================
00:23:20.444 [2024-11-26T18:24:54.905Z] Total : 5009.24 19.57 0.00 0.00 25460.00 0.00 34078.72
00:23:20.444 [2024-11-26 18:24:54.531525] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:23:20.444 {
00:23:20.444 "results": [
00:23:20.444 {
00:23:20.444 "job": "ftl0",
00:23:20.444 "core_mask": "0x1",
00:23:20.444 "workload": "verify",
00:23:20.444 "status": "finished",
00:23:20.444 "verify_range": {
00:23:20.444 "start": 0,
00:23:20.444 "length": 20971520
00:23:20.444 },
00:23:20.444 "queue_depth": 128,
00:23:20.444 "io_size": 4096,
00:23:20.444 "runtime": 4.014383,
00:23:20.444 "iops": 5009.238032345194,
00:23:20.444 "mibps": 19.567336063848416,
00:23:20.444 "io_failed": 0,
00:23:20.444 "io_timeout": 0,
00:23:20.444 "avg_latency_us": 25459.99736490671,
00:23:20.444 "min_latency_us": 381.6727272727273,
00:23:20.444 "max_latency_us": 34078.72
00:23:20.444 }
00:23:20.444 ],
00:23:20.444 "core_count": 1
00:23:20.444 }
00:23:20.444 18:24:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:23:20.444 [2024-11-26 18:24:54.812706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:20.444 [2024-11-26 18:24:54.812800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:23:20.444 [2024-11-26 18:24:54.812821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:23:20.444 [2024-11-26 18:24:54.812834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:20.444 [2024-11-26 18:24:54.812865] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:20.445 [2024-11-26 18:24:54.816293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:20.445 [2024-11-26 18:24:54.816325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:23:20.445 [2024-11-26 18:24:54.816358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.402 ms
00:23:20.445 [2024-11-26 18:24:54.816384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:20.445 [2024-11-26 18:24:54.818429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:20.445 [2024-11-26 18:24:54.818534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:23:20.445 [2024-11-26 18:24:54.818569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.014 ms
00:23:20.445 [2024-11-26 18:24:54.818583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:20.703 [2024-11-26 18:24:55.013852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:20.703 [2024-11-26 18:24:55.013950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:23:20.703 [2024-11-26 18:24:55.014014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 195.238 ms 00:23:20.703 [2024-11-26 18:24:55.014027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.703 [2024-11-26 18:24:55.020890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.703 [2024-11-26 18:24:55.020924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:20.703 [2024-11-26 18:24:55.020956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.812 ms 00:23:20.703 [2024-11-26 18:24:55.021003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.703 [2024-11-26 18:24:55.051796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.703 [2024-11-26 18:24:55.051837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:20.703 [2024-11-26 18:24:55.051871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.729 ms 00:23:20.703 [2024-11-26 18:24:55.051897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.703 [2024-11-26 18:24:55.069714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.703 [2024-11-26 18:24:55.069757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:20.703 [2024-11-26 18:24:55.069792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.771 ms 00:23:20.703 [2024-11-26 18:24:55.069803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.703 [2024-11-26 18:24:55.069955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.703 [2024-11-26 18:24:55.069976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:20.703 [2024-11-26 18:24:55.070009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:23:20.704 [2024-11-26 18:24:55.070020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.704 [2024-11-26 18:24:55.100121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.704 [2024-11-26 18:24:55.100166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:20.704 [2024-11-26 18:24:55.100202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.075 ms 00:23:20.704 [2024-11-26 18:24:55.100214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.704 [2024-11-26 18:24:55.129632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.704 [2024-11-26 18:24:55.129672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:20.704 [2024-11-26 18:24:55.129706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.369 ms 00:23:20.704 [2024-11-26 18:24:55.129717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.704 [2024-11-26 18:24:55.160652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.704 [2024-11-26 18:24:55.160683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:20.704 [2024-11-26 18:24:55.160700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.889 ms 00:23:20.704 [2024-11-26 18:24:55.160710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.964 [2024-11-26 18:24:55.188847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.964 [2024-11-26 18:24:55.188887] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:20.964 [2024-11-26 18:24:55.188924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.033 ms 00:23:20.964 [2024-11-26 18:24:55.188934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.965 [2024-11-26 18:24:55.188995] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:20.965 [2024-11-26 18:24:55.189019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:23:20.965 [2024-11-26 18:24:55.189299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.189997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.190025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.190053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.190065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.190081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.190094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.190108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.190119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.190134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.190145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.190161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.190173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.190187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.190204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.190219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.190231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.190244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.190256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.190270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:20.965 [2024-11-26 18:24:55.190281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:20.966 [2024-11-26 18:24:55.190295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:20.966 [2024-11-26 18:24:55.190307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:20.966 [2024-11-26 18:24:55.190321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:20.966 [2024-11-26 18:24:55.190333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:20.966 [2024-11-26 18:24:55.190346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:20.966 [2024-11-26 18:24:55.190358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:20.966 [2024-11-26 18:24:55.190374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:20.966 [2024-11-26 18:24:55.190386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:20.966 [2024-11-26 18:24:55.190400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:20.966 [2024-11-26 18:24:55.190412] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:20.966 [2024-11-26 18:24:55.190427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:20.966 [2024-11-26 18:24:55.190439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:20.966 [2024-11-26 18:24:55.190452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:20.966 [2024-11-26 18:24:55.190473] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:20.966 [2024-11-26 18:24:55.190494] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 70da5b9c-103a-49b9-a851-79d4d389bf70 00:23:20.966 [2024-11-26 18:24:55.190546] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:20.966 [2024-11-26 18:24:55.190566] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:20.966 [2024-11-26 18:24:55.190586] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:20.966 [2024-11-26 18:24:55.190603] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:20.966 [2024-11-26 18:24:55.190614] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:20.966 [2024-11-26 18:24:55.190628] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:20.966 [2024-11-26 18:24:55.190639] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:20.966 [2024-11-26 18:24:55.190655] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:20.966 [2024-11-26 18:24:55.190665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:20.966 [2024-11-26 18:24:55.190679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.966 [2024-11-26 18:24:55.190691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:20.966 [2024-11-26 18:24:55.190706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.687 ms 00:23:20.966 [2024-11-26 18:24:55.190717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.966 [2024-11-26 18:24:55.206979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.966 [2024-11-26 18:24:55.207188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:20.966 [2024-11-26 18:24:55.207220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.199 ms 00:23:20.966 [2024-11-26 18:24:55.207234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.966 [2024-11-26 18:24:55.207789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.966 [2024-11-26 18:24:55.207813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:20.966 [2024-11-26 18:24:55.207829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:23:20.966 [2024-11-26 18:24:55.207840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.966 [2024-11-26 18:24:55.251919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.966 [2024-11-26 18:24:55.251970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:20.966 [2024-11-26 18:24:55.252006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.966 [2024-11-26 18:24:55.252017] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:20.966 [2024-11-26 18:24:55.252084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.966 [2024-11-26 18:24:55.252099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:20.966 [2024-11-26 18:24:55.252112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.966 [2024-11-26 18:24:55.252123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.966 [2024-11-26 18:24:55.252229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.966 [2024-11-26 18:24:55.252251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:20.966 [2024-11-26 18:24:55.252265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.966 [2024-11-26 18:24:55.252276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.966 [2024-11-26 18:24:55.252300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.966 [2024-11-26 18:24:55.252313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:20.966 [2024-11-26 18:24:55.252344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.966 [2024-11-26 18:24:55.252355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.966 [2024-11-26 18:24:55.352933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.966 [2024-11-26 18:24:55.353023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:20.966 [2024-11-26 18:24:55.353063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.966 [2024-11-26 18:24:55.353074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.225 [2024-11-26 18:24:55.428696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.225 [2024-11-26 18:24:55.428997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:21.225 [2024-11-26 18:24:55.429032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.225 [2024-11-26 18:24:55.429045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.225 [2024-11-26 18:24:55.429216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.225 [2024-11-26 18:24:55.429236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:21.225 [2024-11-26 18:24:55.429251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.225 [2024-11-26 18:24:55.429262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.225 [2024-11-26 18:24:55.429335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.225 [2024-11-26 18:24:55.429367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:21.225 [2024-11-26 18:24:55.429396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.225 [2024-11-26 18:24:55.429406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.225 [2024-11-26 18:24:55.429544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.225 [2024-11-26 18:24:55.429565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:21.225 [2024-11-26 18:24:55.429582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:23:21.225 [2024-11-26 18:24:55.429592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.225 [2024-11-26 18:24:55.429718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.225 [2024-11-26 18:24:55.429752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:21.225 [2024-11-26 18:24:55.429767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.225 [2024-11-26 18:24:55.429778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.225 [2024-11-26 18:24:55.429828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.225 [2024-11-26 18:24:55.429846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:21.225 [2024-11-26 18:24:55.429865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.225 [2024-11-26 18:24:55.429893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.225 [2024-11-26 18:24:55.429955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:21.225 [2024-11-26 18:24:55.429970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:21.225 [2024-11-26 18:24:55.430016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:21.225 [2024-11-26 18:24:55.430027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.225 [2024-11-26 18:24:55.430214] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 617.466 ms, result 0 00:23:21.225 true 00:23:21.225 18:24:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77992 00:23:21.225 18:24:55 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77992 ']' 00:23:21.225 18:24:55 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77992 00:23:21.225 18:24:55 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:23:21.225 18:24:55 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.225 18:24:55 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77992 00:23:21.225 killing process with pid 77992 00:23:21.225 Received shutdown signal, test time was about 4.000000 seconds 00:23:21.225 00:23:21.225 Latency(us) 00:23:21.225 [2024-11-26T18:24:55.686Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.225 [2024-11-26T18:24:55.686Z] =================================================================================================================== 00:23:21.225 [2024-11-26T18:24:55.686Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:21.225 18:24:55 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:21.225 18:24:55 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:21.225 18:24:55 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77992' 00:23:21.225 18:24:55 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77992 00:23:21.225 18:24:55 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77992 00:23:23.129 Remove shared memory files 00:23:23.129 18:24:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:23.129 18:24:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:23:23.129 18:24:57 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:23.129 18:24:57 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:23:23.129 18:24:57 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:23:23.129 18:24:57 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:23:23.129 18:24:57 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:23.129 18:24:57 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:23:23.129 ************************************ 00:23:23.129 END TEST ftl_bdevperf 00:23:23.129 ************************************ 00:23:23.129 00:23:23.129 real 0m24.232s 00:23:23.129 user 0m27.846s 00:23:23.129 sys 0m1.279s 00:23:23.129 18:24:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:23.129 18:24:57 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:23.129 18:24:57 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:23:23.129 18:24:57 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:23.129 18:24:57 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:23.129 18:24:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:23.129 ************************************ 00:23:23.129 START TEST ftl_trim 00:23:23.129 ************************************ 00:23:23.129 18:24:57 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:23:23.129 * Looking for test storage... 00:23:23.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:23.129 18:24:57 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:23.129 18:24:57 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:23:23.129 18:24:57 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:23.129 18:24:57 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:23.129 18:24:57 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:23:23.129 18:24:57 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:23.129 18:24:57 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:23.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.129 --rc genhtml_branch_coverage=1 00:23:23.129 --rc genhtml_function_coverage=1 00:23:23.129 --rc genhtml_legend=1 00:23:23.129 --rc geninfo_all_blocks=1 00:23:23.129 --rc geninfo_unexecuted_blocks=1 00:23:23.129 00:23:23.129 ' 00:23:23.129 18:24:57 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:23.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.129 --rc genhtml_branch_coverage=1 00:23:23.129 --rc genhtml_function_coverage=1 00:23:23.129 --rc genhtml_legend=1 00:23:23.129 --rc geninfo_all_blocks=1 00:23:23.129 --rc geninfo_unexecuted_blocks=1 00:23:23.129 00:23:23.129 ' 00:23:23.129 18:24:57 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:23.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.129 --rc genhtml_branch_coverage=1 00:23:23.129 --rc genhtml_function_coverage=1 00:23:23.129 --rc genhtml_legend=1 00:23:23.129 --rc geninfo_all_blocks=1 00:23:23.129 --rc geninfo_unexecuted_blocks=1 00:23:23.129 00:23:23.129 ' 00:23:23.129 18:24:57 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:23.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.129 --rc genhtml_branch_coverage=1 00:23:23.129 --rc genhtml_function_coverage=1 00:23:23.129 --rc genhtml_legend=1 00:23:23.129 --rc geninfo_all_blocks=1 00:23:23.129 --rc geninfo_unexecuted_blocks=1 00:23:23.129 00:23:23.129 ' 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:23.129 18:24:57 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78353 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:23:23.129 18:24:57 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78353 00:23:23.129 18:24:57 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78353 ']' 00:23:23.129 18:24:57 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.130 18:24:57 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:23.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.130 18:24:57 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.130 18:24:57 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.130 18:24:57 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:23.130 [2024-11-26 18:24:57.547763] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:23:23.130 [2024-11-26 18:24:57.547946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78353 ] 00:23:23.388 [2024-11-26 18:24:57.732397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:23.647 [2024-11-26 18:24:57.848408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.647 [2024-11-26 18:24:57.848530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.647 [2024-11-26 18:24:57.848598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.645 18:24:58 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.645 18:24:58 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:23:24.645 18:24:58 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:24.645 18:24:58 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:23:24.645 18:24:58 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:24.645 18:24:58 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:23:24.645 18:24:58 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:23:24.645 18:24:58 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:24.916 18:24:59 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:24.917 18:24:59 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:23:24.917 18:24:59 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:24.917 18:24:59 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:24.917 18:24:59 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:24.917 18:24:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:24.917 18:24:59 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:24.917 18:24:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:24.917 18:24:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:24.917 { 00:23:24.917 "name": "nvme0n1", 00:23:24.917 "aliases": [ 
00:23:24.917 "cc7c8ab0-d10c-4a27-9d15-91fca1075f0b" 00:23:24.917 ], 00:23:24.917 "product_name": "NVMe disk", 00:23:24.917 "block_size": 4096, 00:23:24.917 "num_blocks": 1310720, 00:23:24.917 "uuid": "cc7c8ab0-d10c-4a27-9d15-91fca1075f0b", 00:23:24.917 "numa_id": -1, 00:23:24.917 "assigned_rate_limits": { 00:23:24.917 "rw_ios_per_sec": 0, 00:23:24.917 "rw_mbytes_per_sec": 0, 00:23:24.917 "r_mbytes_per_sec": 0, 00:23:24.917 "w_mbytes_per_sec": 0 00:23:24.917 }, 00:23:24.917 "claimed": true, 00:23:24.917 "claim_type": "read_many_write_one", 00:23:24.917 "zoned": false, 00:23:24.917 "supported_io_types": { 00:23:24.917 "read": true, 00:23:24.917 "write": true, 00:23:24.917 "unmap": true, 00:23:24.917 "flush": true, 00:23:24.917 "reset": true, 00:23:24.917 "nvme_admin": true, 00:23:24.917 "nvme_io": true, 00:23:24.917 "nvme_io_md": false, 00:23:24.917 "write_zeroes": true, 00:23:24.917 "zcopy": false, 00:23:24.917 "get_zone_info": false, 00:23:24.917 "zone_management": false, 00:23:24.917 "zone_append": false, 00:23:24.918 "compare": true, 00:23:24.918 "compare_and_write": false, 00:23:24.918 "abort": true, 00:23:24.918 "seek_hole": false, 00:23:24.918 "seek_data": false, 00:23:24.918 "copy": true, 00:23:24.918 "nvme_iov_md": false 00:23:24.918 }, 00:23:24.918 "driver_specific": { 00:23:24.918 "nvme": [ 00:23:24.918 { 00:23:24.918 "pci_address": "0000:00:11.0", 00:23:24.918 "trid": { 00:23:24.918 "trtype": "PCIe", 00:23:24.918 "traddr": "0000:00:11.0" 00:23:24.918 }, 00:23:24.918 "ctrlr_data": { 00:23:24.918 "cntlid": 0, 00:23:24.918 "vendor_id": "0x1b36", 00:23:24.918 "model_number": "QEMU NVMe Ctrl", 00:23:24.918 "serial_number": "12341", 00:23:24.918 "firmware_revision": "8.0.0", 00:23:24.918 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:24.918 "oacs": { 00:23:24.918 "security": 0, 00:23:24.918 "format": 1, 00:23:24.918 "firmware": 0, 00:23:24.918 "ns_manage": 1 00:23:24.918 }, 00:23:24.918 "multi_ctrlr": false, 00:23:24.918 "ana_reporting": false 00:23:24.918 }, 00:23:24.918 "vs": { 00:23:24.918 "nvme_version": "1.4" 00:23:24.918 }, 00:23:24.918 "ns_data": { 00:23:24.918 "id": 1, 00:23:24.918 "can_share": false 00:23:24.918 } 00:23:24.918 } 00:23:24.918 ], 00:23:24.918 "mp_policy": "active_passive" 00:23:24.918 } 00:23:24.918 } 00:23:24.918 ]' 00:23:24.918 18:24:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:25.184 18:24:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:23:25.184 18:24:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:25.184 18:24:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:25.184 18:24:59 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:25.184 18:24:59 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:23:25.184 18:24:59 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:23:25.184 18:24:59 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:25.184 18:24:59 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:23:25.184 18:24:59 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:25.184 18:24:59 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:25.442 18:24:59 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=40e1cd11-f05f-4621-96c0-d23fad81268e 00:23:25.442 18:24:59 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:23:25.442 18:24:59 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 40e1cd11-f05f-4621-96c0-d23fad81268e 00:23:25.700 18:24:59 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:25.958 18:25:00 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=3004a13b-69f0-4cea-b22f-38e0a33c9c89 00:23:25.958 18:25:00 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 3004a13b-69f0-4cea-b22f-38e0a33c9c89 00:23:26.218 18:25:00 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=7550dae3-4ee8-4974-8dae-0b11b2fbf432 00:23:26.218 18:25:00 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7550dae3-4ee8-4974-8dae-0b11b2fbf432 00:23:26.218 18:25:00 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:23:26.218 18:25:00 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:26.218 18:25:00 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=7550dae3-4ee8-4974-8dae-0b11b2fbf432 00:23:26.218 18:25:00 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:23:26.218 18:25:00 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 7550dae3-4ee8-4974-8dae-0b11b2fbf432 00:23:26.218 18:25:00 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=7550dae3-4ee8-4974-8dae-0b11b2fbf432 00:23:26.218 18:25:00 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:26.218 18:25:00 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:26.218 18:25:00 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:26.218 18:25:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7550dae3-4ee8-4974-8dae-0b11b2fbf432 00:23:26.477 18:25:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:26.477 { 00:23:26.477 "name": "7550dae3-4ee8-4974-8dae-0b11b2fbf432", 00:23:26.477 "aliases": [ 00:23:26.477 "lvs/nvme0n1p0" 00:23:26.477 ], 00:23:26.477 "product_name": "Logical Volume", 00:23:26.477 "block_size": 4096, 00:23:26.477 "num_blocks": 26476544, 00:23:26.477 "uuid": "7550dae3-4ee8-4974-8dae-0b11b2fbf432", 00:23:26.477 "assigned_rate_limits": { 00:23:26.477 "rw_ios_per_sec": 0, 00:23:26.477 "rw_mbytes_per_sec": 0, 00:23:26.477 "r_mbytes_per_sec": 0, 00:23:26.477 "w_mbytes_per_sec": 0 00:23:26.477 }, 00:23:26.477 "claimed": false, 00:23:26.477 "zoned": false, 00:23:26.477 "supported_io_types": { 00:23:26.477 "read": true, 00:23:26.477 "write": true, 00:23:26.477 "unmap": true, 00:23:26.477 "flush": false, 00:23:26.477 "reset": true, 00:23:26.477 "nvme_admin": false, 00:23:26.477 "nvme_io": false, 00:23:26.477 "nvme_io_md": false, 00:23:26.477 "write_zeroes": true, 00:23:26.477 "zcopy": false, 00:23:26.477 "get_zone_info": false, 00:23:26.477 "zone_management": false, 00:23:26.477 "zone_append": false, 00:23:26.477 "compare": false, 00:23:26.477 "compare_and_write": false, 00:23:26.477 "abort": false, 00:23:26.477 "seek_hole": true, 00:23:26.477 "seek_data": true, 00:23:26.477 "copy": false, 00:23:26.477 "nvme_iov_md": false 00:23:26.477 }, 00:23:26.477 "driver_specific": { 00:23:26.477 "lvol": { 00:23:26.477 "lvol_store_uuid": "3004a13b-69f0-4cea-b22f-38e0a33c9c89", 00:23:26.477 "base_bdev": "nvme0n1", 00:23:26.477 "thin_provision": true, 00:23:26.477 "num_allocated_clusters": 0, 00:23:26.477 "snapshot": false, 00:23:26.477 "clone": false, 00:23:26.477 "esnap_clone": false 00:23:26.477 } 00:23:26.477 } 00:23:26.477 } 00:23:26.477 ]' 00:23:26.477 18:25:00 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:26.477 18:25:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:23:26.477 18:25:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:26.477 18:25:00 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:26.477 18:25:00 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:26.477 18:25:00 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:23:26.477 18:25:00 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:23:26.477 18:25:00 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:23:26.477 18:25:00 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:27.045 18:25:01 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:27.045 18:25:01 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:27.045 18:25:01 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 7550dae3-4ee8-4974-8dae-0b11b2fbf432 00:23:27.045 18:25:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=7550dae3-4ee8-4974-8dae-0b11b2fbf432 00:23:27.045 18:25:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:27.045 18:25:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:27.045 18:25:01 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:27.045 18:25:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7550dae3-4ee8-4974-8dae-0b11b2fbf432 00:23:27.303 18:25:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:27.303 { 00:23:27.303 "name": "7550dae3-4ee8-4974-8dae-0b11b2fbf432", 00:23:27.303 "aliases": [ 00:23:27.303 "lvs/nvme0n1p0" 00:23:27.303 ], 00:23:27.303 "product_name": "Logical Volume", 00:23:27.303 "block_size": 4096, 00:23:27.303 "num_blocks": 26476544, 00:23:27.303 "uuid": "7550dae3-4ee8-4974-8dae-0b11b2fbf432", 00:23:27.304 "assigned_rate_limits": { 00:23:27.304 "rw_ios_per_sec": 0, 00:23:27.304 "rw_mbytes_per_sec": 0, 00:23:27.304 "r_mbytes_per_sec": 0, 00:23:27.304 "w_mbytes_per_sec": 0 00:23:27.304 }, 00:23:27.304 "claimed": false, 00:23:27.304 "zoned": false, 00:23:27.304 "supported_io_types": { 00:23:27.304 "read": true, 00:23:27.304 "write": true, 00:23:27.304 "unmap": true, 00:23:27.304 "flush": false, 00:23:27.304 "reset": true, 00:23:27.304 "nvme_admin": false, 00:23:27.304 "nvme_io": false, 00:23:27.304 "nvme_io_md": false, 00:23:27.304 "write_zeroes": true, 00:23:27.304 "zcopy": false, 00:23:27.304 "get_zone_info": false, 00:23:27.304 "zone_management": false, 00:23:27.304 "zone_append": false, 00:23:27.304 "compare": false, 00:23:27.304 "compare_and_write": false, 00:23:27.304 "abort": false, 00:23:27.304 "seek_hole": true, 00:23:27.304 "seek_data": true, 00:23:27.304 "copy": false, 00:23:27.304 "nvme_iov_md": false 00:23:27.304 }, 00:23:27.304 "driver_specific": { 00:23:27.304 "lvol": { 00:23:27.304 "lvol_store_uuid": "3004a13b-69f0-4cea-b22f-38e0a33c9c89", 00:23:27.304 "base_bdev": "nvme0n1", 00:23:27.304 "thin_provision": true, 00:23:27.304 "num_allocated_clusters": 0, 00:23:27.304 "snapshot": false, 00:23:27.304 "clone": false, 00:23:27.304 "esnap_clone": false 00:23:27.304 } 00:23:27.304 } 00:23:27.304 } 00:23:27.304 ]' 00:23:27.304 18:25:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:27.304 18:25:01 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:23:27.304 18:25:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:27.304 18:25:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:27.304 18:25:01 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:27.304 18:25:01 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:23:27.304 18:25:01 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:23:27.304 18:25:01 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:27.562 18:25:01 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:23:27.562 18:25:01 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:23:27.562 18:25:01 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 7550dae3-4ee8-4974-8dae-0b11b2fbf432 00:23:27.562 18:25:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=7550dae3-4ee8-4974-8dae-0b11b2fbf432 00:23:27.562 18:25:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:27.562 18:25:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:27.562 18:25:01 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:27.562 18:25:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7550dae3-4ee8-4974-8dae-0b11b2fbf432 00:23:27.820 18:25:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:27.820 { 00:23:27.820 "name": "7550dae3-4ee8-4974-8dae-0b11b2fbf432", 00:23:27.820 "aliases": [ 00:23:27.820 "lvs/nvme0n1p0" 00:23:27.820 ], 00:23:27.820 "product_name": "Logical Volume", 00:23:27.820 "block_size": 4096, 00:23:27.820 "num_blocks": 26476544, 00:23:27.820 "uuid": "7550dae3-4ee8-4974-8dae-0b11b2fbf432", 00:23:27.820 "assigned_rate_limits": { 00:23:27.820 "rw_ios_per_sec": 0, 00:23:27.820 "rw_mbytes_per_sec": 0, 00:23:27.820 "r_mbytes_per_sec": 0, 00:23:27.820 "w_mbytes_per_sec": 0 00:23:27.820 }, 00:23:27.820 "claimed": false, 00:23:27.820 "zoned": false, 00:23:27.820 "supported_io_types": { 00:23:27.820 "read": true, 00:23:27.820 "write": true, 00:23:27.820 "unmap": true, 00:23:27.820 "flush": false, 00:23:27.820 "reset": true, 00:23:27.820 "nvme_admin": false, 00:23:27.820 "nvme_io": false, 00:23:27.820 "nvme_io_md": false, 00:23:27.820 "write_zeroes": true, 00:23:27.820 "zcopy": false, 00:23:27.820 "get_zone_info": false, 00:23:27.820 "zone_management": false, 00:23:27.820 "zone_append": false, 00:23:27.820 "compare": false, 00:23:27.820 "compare_and_write": false, 00:23:27.820 "abort": false, 00:23:27.820 "seek_hole": true, 00:23:27.820 "seek_data": true, 00:23:27.820 "copy": false, 00:23:27.820 "nvme_iov_md": false 00:23:27.820 }, 00:23:27.820 "driver_specific": { 00:23:27.820 "lvol": { 00:23:27.820 "lvol_store_uuid": "3004a13b-69f0-4cea-b22f-38e0a33c9c89", 00:23:27.820 "base_bdev": "nvme0n1", 00:23:27.820 "thin_provision": true, 00:23:27.820 "num_allocated_clusters": 0, 00:23:27.820 "snapshot": false, 00:23:27.820 "clone": false, 00:23:27.820 "esnap_clone": false 00:23:27.820 } 00:23:27.820 } 00:23:27.820 } 00:23:27.820 ]' 00:23:27.820 18:25:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:27.820 18:25:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:23:27.820 18:25:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:28.079 18:25:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:23:28.079 18:25:02 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:28.079 18:25:02 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:23:28.079 18:25:02 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:23:28.079 18:25:02 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7550dae3-4ee8-4974-8dae-0b11b2fbf432 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:23:28.079 [2024-11-26 18:25:02.530985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.079 [2024-11-26 18:25:02.531250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:28.079 [2024-11-26 18:25:02.531304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:28.079 [2024-11-26 18:25:02.531322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.079 [2024-11-26 18:25:02.535355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.079 [2024-11-26 18:25:02.535401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:28.079 [2024-11-26 18:25:02.535437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.988 ms 00:23:28.079 [2024-11-26 18:25:02.535449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.079 [2024-11-26 18:25:02.535731] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:28.079 [2024-11-26 18:25:02.536830] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:28.079 [2024-11-26 18:25:02.536881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.079 [2024-11-26 18:25:02.536897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:28.079 [2024-11-26 18:25:02.536922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.163 ms 00:23:28.079 [2024-11-26 18:25:02.536933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.079 [2024-11-26 18:25:02.537174] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2e417d21-16e8-4060-8f5a-5ce9752d454b 00:23:28.339 [2024-11-26 18:25:02.539360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.339 [2024-11-26 18:25:02.539422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:28.339 [2024-11-26 18:25:02.539440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:23:28.339 [2024-11-26 18:25:02.539454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.339 [2024-11-26 18:25:02.550236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.339 [2024-11-26 18:25:02.550310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:28.339 [2024-11-26 18:25:02.550330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.622 ms 00:23:28.339 [2024-11-26 18:25:02.550344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.339 [2024-11-26 18:25:02.550638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.339 [2024-11-26 18:25:02.550669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:28.339 [2024-11-26 18:25:02.550684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.155 ms 00:23:28.339 [2024-11-26 18:25:02.550705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.339 [2024-11-26 18:25:02.550766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.339 [2024-11-26 18:25:02.550786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:28.339 [2024-11-26 18:25:02.550802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:28.339 [2024-11-26 18:25:02.550828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.339 [2024-11-26 18:25:02.550895] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:28.339 [2024-11-26 18:25:02.556068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.339 [2024-11-26 18:25:02.556110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:28.339 [2024-11-26 18:25:02.556148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.179 ms 00:23:28.339 [2024-11-26 18:25:02.556159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.339 [2024-11-26 18:25:02.556250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.339 [2024-11-26 18:25:02.556290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:28.339 [2024-11-26 18:25:02.556307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:28.339 [2024-11-26 18:25:02.556318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.339 [2024-11-26 18:25:02.556374] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:28.339 [2024-11-26 18:25:02.556521] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:28.339 [2024-11-26 18:25:02.556545] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:28.339 [2024-11-26 18:25:02.556601] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:28.339 [2024-11-26 18:25:02.556622] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:28.339 [2024-11-26 18:25:02.556635] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:28.339 [2024-11-26 18:25:02.556664] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:28.339 [2024-11-26 18:25:02.556678] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:28.339 [2024-11-26 18:25:02.556708] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:28.339 [2024-11-26 18:25:02.556720] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:28.339 [2024-11-26 18:25:02.556735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.339 [2024-11-26 18:25:02.556747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:28.339 [2024-11-26 18:25:02.556762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:23:28.339 [2024-11-26 18:25:02.556773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.339 [2024-11-26 18:25:02.556898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
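[editorial note] A quick cross-check of the geometry reported in this startup dump; every input below is copied from the NOTICE lines around it, and the snippet is only shell arithmetic, not part of the test:

    # All inputs are values printed by ftl_layout_setup above.
    echo $(( 26476544 * 4096 / 1048576 ))   # 103424  -> "Base device capacity: 103424.00 MiB"
    echo $(( 23592960 * 4 ))                # 94371840 B = 90 MiB -> the "Region l2p" dumped below
    echo $(( 23592960 * 4096 / 1048576 ))   # 92160 MiB exposed by ftl0: one 4 KiB block per L2P entry,
                                            # roughly base capacity less the 10% --overprovisioning and metadata

Since the full 90 MiB L2P table does not fit in the 60 MiB allowed by --l2p_dram_limit, only part of it stays resident in DRAM; the l2p cache reports this further down as "l2p maximum resident size is: 59 (of 60) MiB".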
00:23:28.339 [2024-11-26 18:25:02.556913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:28.339 [2024-11-26 18:25:02.556930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:28.339 [2024-11-26 18:25:02.556968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.339 [2024-11-26 18:25:02.557160] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:28.339 [2024-11-26 18:25:02.557189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:28.339 [2024-11-26 18:25:02.557211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:28.339 [2024-11-26 18:25:02.557224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.339 [2024-11-26 18:25:02.557243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:28.339 [2024-11-26 18:25:02.557264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:28.339 [2024-11-26 18:25:02.557288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:28.339 [2024-11-26 18:25:02.557308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:28.339 [2024-11-26 18:25:02.557323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:28.339 [2024-11-26 18:25:02.557334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:28.339 [2024-11-26 18:25:02.557347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:28.339 [2024-11-26 18:25:02.557357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:28.339 [2024-11-26 18:25:02.557370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:28.339 [2024-11-26 18:25:02.557380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:28.339 [2024-11-26 18:25:02.557409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:28.339 [2024-11-26 18:25:02.557426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.339 [2024-11-26 18:25:02.557449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:28.339 [2024-11-26 18:25:02.557461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:28.339 [2024-11-26 18:25:02.557474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.339 [2024-11-26 18:25:02.557486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:28.339 [2024-11-26 18:25:02.557502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:28.339 [2024-11-26 18:25:02.557513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:28.339 [2024-11-26 18:25:02.557525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:28.339 [2024-11-26 18:25:02.557535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:28.339 [2024-11-26 18:25:02.557564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:28.339 [2024-11-26 18:25:02.557574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:28.339 [2024-11-26 18:25:02.557587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:28.339 [2024-11-26 18:25:02.557597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:28.339 [2024-11-26 18:25:02.557609] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l3 00:23:28.339 [2024-11-26 18:25:02.557619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:28.339 [2024-11-26 18:25:02.557645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:28.339 [2024-11-26 18:25:02.557659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:28.339 [2024-11-26 18:25:02.557674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:28.340 [2024-11-26 18:25:02.557684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:28.340 [2024-11-26 18:25:02.557696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:28.340 [2024-11-26 18:25:02.557707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:28.340 [2024-11-26 18:25:02.557737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:28.340 [2024-11-26 18:25:02.557748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:28.340 [2024-11-26 18:25:02.557761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:28.340 [2024-11-26 18:25:02.557771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.340 [2024-11-26 18:25:02.557785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:28.340 [2024-11-26 18:25:02.557795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:28.340 [2024-11-26 18:25:02.557808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.340 [2024-11-26 18:25:02.557818] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:28.340 [2024-11-26 18:25:02.557832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:28.340 [2024-11-26 18:25:02.557842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:28.340 [2024-11-26 18:25:02.557856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:28.340 [2024-11-26 18:25:02.557868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:28.340 [2024-11-26 18:25:02.557884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:28.340 [2024-11-26 18:25:02.557895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:28.340 [2024-11-26 18:25:02.557908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:28.340 [2024-11-26 18:25:02.557919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:28.340 [2024-11-26 18:25:02.557932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:28.340 [2024-11-26 18:25:02.557947] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:28.340 [2024-11-26 18:25:02.557985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:28.340 [2024-11-26 18:25:02.557998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:28.340 [2024-11-26 18:25:02.558013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:28.340 [2024-11-26 18:25:02.558025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x5aa0 blk_sz:0x80 00:23:28.340 [2024-11-26 18:25:02.558039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:28.340 [2024-11-26 18:25:02.558050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:28.340 [2024-11-26 18:25:02.558064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:28.340 [2024-11-26 18:25:02.558076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:28.340 [2024-11-26 18:25:02.558090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:28.340 [2024-11-26 18:25:02.558102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:28.340 [2024-11-26 18:25:02.558118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:28.340 [2024-11-26 18:25:02.558129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:28.340 [2024-11-26 18:25:02.558143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:28.340 [2024-11-26 18:25:02.558155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:28.340 [2024-11-26 18:25:02.558171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:28.340 [2024-11-26 18:25:02.558183] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:28.340 [2024-11-26 18:25:02.558198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:28.340 [2024-11-26 18:25:02.558210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:28.340 [2024-11-26 18:25:02.558225] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:28.340 [2024-11-26 18:25:02.558237] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:28.340 [2024-11-26 18:25:02.558251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:28.340 [2024-11-26 18:25:02.558263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:28.340 [2024-11-26 18:25:02.558278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:28.340 [2024-11-26 18:25:02.558290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.181 ms 00:23:28.340 [2024-11-26 18:25:02.558304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:28.340 [2024-11-26 18:25:02.558445] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV 
cache data region needs scrubbing, this may take a while. 00:23:28.340 [2024-11-26 18:25:02.558468] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:31.627 [2024-11-26 18:25:05.765785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.627 [2024-11-26 18:25:05.765895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:31.627 [2024-11-26 18:25:05.765935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3207.355 ms 00:23:31.627 [2024-11-26 18:25:05.765950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.627 [2024-11-26 18:25:05.804574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.627 [2024-11-26 18:25:05.804668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:31.627 [2024-11-26 18:25:05.804692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.302 ms 00:23:31.627 [2024-11-26 18:25:05.804707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.627 [2024-11-26 18:25:05.804910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.627 [2024-11-26 18:25:05.804936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:31.627 [2024-11-26 18:25:05.804980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:31.627 [2024-11-26 18:25:05.804998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.627 [2024-11-26 18:25:05.858652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.627 [2024-11-26 18:25:05.858731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:31.627 [2024-11-26 18:25:05.858772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.601 ms 00:23:31.627 [2024-11-26 18:25:05.858789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.627 [2024-11-26 18:25:05.858989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.627 [2024-11-26 18:25:05.859013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:31.627 [2024-11-26 18:25:05.859029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:31.628 [2024-11-26 18:25:05.859043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.628 [2024-11-26 18:25:05.859743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.628 [2024-11-26 18:25:05.859784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:31.628 [2024-11-26 18:25:05.859799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:23:31.628 [2024-11-26 18:25:05.859813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.628 [2024-11-26 18:25:05.860015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.628 [2024-11-26 18:25:05.860035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:31.628 [2024-11-26 18:25:05.860101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:23:31.628 [2024-11-26 18:25:05.860119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.628 [2024-11-26 18:25:05.881447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.628 [2024-11-26 18:25:05.881518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:23:31.628 [2024-11-26 18:25:05.881537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.278 ms 00:23:31.628 [2024-11-26 18:25:05.881559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.628 [2024-11-26 18:25:05.895066] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:31.628 [2024-11-26 18:25:05.918016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.628 [2024-11-26 18:25:05.918099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:31.628 [2024-11-26 18:25:05.918141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.236 ms 00:23:31.628 [2024-11-26 18:25:05.918154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.628 [2024-11-26 18:25:06.009834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.628 [2024-11-26 18:25:06.009920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:31.628 [2024-11-26 18:25:06.009968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.494 ms 00:23:31.628 [2024-11-26 18:25:06.009981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.628 [2024-11-26 18:25:06.010277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.628 [2024-11-26 18:25:06.010299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:31.628 [2024-11-26 18:25:06.010319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:23:31.628 [2024-11-26 18:25:06.010332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.628 [2024-11-26 18:25:06.038413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.628 [2024-11-26 18:25:06.038456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:31.628 [2024-11-26 18:25:06.038544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.019 ms 00:23:31.628 [2024-11-26 18:25:06.038558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.628 [2024-11-26 18:25:06.065766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.628 [2024-11-26 18:25:06.065809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:31.628 [2024-11-26 18:25:06.065847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.083 ms 00:23:31.628 [2024-11-26 18:25:06.065858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.628 [2024-11-26 18:25:06.066868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.628 [2024-11-26 18:25:06.067098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:31.628 [2024-11-26 18:25:06.067132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.870 ms 00:23:31.628 [2024-11-26 18:25:06.067147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.887 [2024-11-26 18:25:06.153797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.887 [2024-11-26 18:25:06.153862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:31.887 [2024-11-26 18:25:06.153905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.595 ms 00:23:31.887 [2024-11-26 18:25:06.153917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:31.887 [2024-11-26 18:25:06.184556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.887 [2024-11-26 18:25:06.184656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:31.887 [2024-11-26 18:25:06.184680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.493 ms 00:23:31.887 [2024-11-26 18:25:06.184695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.887 [2024-11-26 18:25:06.214713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.887 [2024-11-26 18:25:06.214938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:31.887 [2024-11-26 18:25:06.214973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.908 ms 00:23:31.887 [2024-11-26 18:25:06.214986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.887 [2024-11-26 18:25:06.246571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.887 [2024-11-26 18:25:06.246651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:31.887 [2024-11-26 18:25:06.246676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.446 ms 00:23:31.887 [2024-11-26 18:25:06.246690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.887 [2024-11-26 18:25:06.246815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.887 [2024-11-26 18:25:06.246838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:31.887 [2024-11-26 18:25:06.246881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:31.887 [2024-11-26 18:25:06.246893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.887 [2024-11-26 18:25:06.247049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.887 [2024-11-26 18:25:06.247066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:31.887 [2024-11-26 18:25:06.247087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:31.887 [2024-11-26 18:25:06.247099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.887 [2024-11-26 18:25:06.248443] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:31.887 [2024-11-26 18:25:06.252643] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3717.198 ms, result 0 00:23:31.887 [2024-11-26 18:25:06.253656] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:31.887 { 00:23:31.887 "name": "ftl0", 00:23:31.888 "uuid": "2e417d21-16e8-4060-8f5a-5ce9752d454b" 00:23:31.888 } 00:23:31.888 18:25:06 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:23:31.888 18:25:06 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:23:31.888 18:25:06 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:31.888 18:25:06 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:23:31.888 18:25:06 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:31.888 18:25:06 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:31.888 18:25:06 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:32.146 18:25:06 
ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:23:32.714 [ 00:23:32.714 { 00:23:32.714 "name": "ftl0", 00:23:32.714 "aliases": [ 00:23:32.714 "2e417d21-16e8-4060-8f5a-5ce9752d454b" 00:23:32.714 ], 00:23:32.714 "product_name": "FTL disk", 00:23:32.714 "block_size": 4096, 00:23:32.714 "num_blocks": 23592960, 00:23:32.714 "uuid": "2e417d21-16e8-4060-8f5a-5ce9752d454b", 00:23:32.714 "assigned_rate_limits": { 00:23:32.714 "rw_ios_per_sec": 0, 00:23:32.714 "rw_mbytes_per_sec": 0, 00:23:32.714 "r_mbytes_per_sec": 0, 00:23:32.714 "w_mbytes_per_sec": 0 00:23:32.714 }, 00:23:32.714 "claimed": false, 00:23:32.714 "zoned": false, 00:23:32.714 "supported_io_types": { 00:23:32.714 "read": true, 00:23:32.714 "write": true, 00:23:32.714 "unmap": true, 00:23:32.714 "flush": true, 00:23:32.714 "reset": false, 00:23:32.714 "nvme_admin": false, 00:23:32.714 "nvme_io": false, 00:23:32.714 "nvme_io_md": false, 00:23:32.714 "write_zeroes": true, 00:23:32.714 "zcopy": false, 00:23:32.714 "get_zone_info": false, 00:23:32.714 "zone_management": false, 00:23:32.714 "zone_append": false, 00:23:32.714 "compare": false, 00:23:32.714 "compare_and_write": false, 00:23:32.714 "abort": false, 00:23:32.714 "seek_hole": false, 00:23:32.714 "seek_data": false, 00:23:32.714 "copy": false, 00:23:32.714 "nvme_iov_md": false 00:23:32.714 }, 00:23:32.714 "driver_specific": { 00:23:32.714 "ftl": { 00:23:32.714 "base_bdev": "7550dae3-4ee8-4974-8dae-0b11b2fbf432", 00:23:32.714 "cache": "nvc0n1p0" 00:23:32.714 } 00:23:32.714 } 00:23:32.714 } 00:23:32.714 ] 00:23:32.714 18:25:06 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:23:32.714 18:25:06 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:23:32.714 18:25:06 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:32.714 18:25:07 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:23:32.714 18:25:07 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:23:32.973 18:25:07 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:23:32.973 { 00:23:32.973 "name": "ftl0", 00:23:32.973 "aliases": [ 00:23:32.973 "2e417d21-16e8-4060-8f5a-5ce9752d454b" 00:23:32.973 ], 00:23:32.973 "product_name": "FTL disk", 00:23:32.973 "block_size": 4096, 00:23:32.973 "num_blocks": 23592960, 00:23:32.973 "uuid": "2e417d21-16e8-4060-8f5a-5ce9752d454b", 00:23:32.973 "assigned_rate_limits": { 00:23:32.973 "rw_ios_per_sec": 0, 00:23:32.973 "rw_mbytes_per_sec": 0, 00:23:32.973 "r_mbytes_per_sec": 0, 00:23:32.973 "w_mbytes_per_sec": 0 00:23:32.973 }, 00:23:32.973 "claimed": false, 00:23:32.973 "zoned": false, 00:23:32.973 "supported_io_types": { 00:23:32.973 "read": true, 00:23:32.973 "write": true, 00:23:32.973 "unmap": true, 00:23:32.973 "flush": true, 00:23:32.973 "reset": false, 00:23:32.973 "nvme_admin": false, 00:23:32.973 "nvme_io": false, 00:23:32.973 "nvme_io_md": false, 00:23:32.973 "write_zeroes": true, 00:23:32.974 "zcopy": false, 00:23:32.974 "get_zone_info": false, 00:23:32.974 "zone_management": false, 00:23:32.974 "zone_append": false, 00:23:32.974 "compare": false, 00:23:32.974 "compare_and_write": false, 00:23:32.974 "abort": false, 00:23:32.974 "seek_hole": false, 00:23:32.974 "seek_data": false, 00:23:32.974 "copy": false, 00:23:32.974 "nvme_iov_md": false 00:23:32.974 }, 00:23:32.974 "driver_specific": { 00:23:32.974 "ftl": { 00:23:32.974 "base_bdev": 
"7550dae3-4ee8-4974-8dae-0b11b2fbf432", 00:23:32.974 "cache": "nvc0n1p0" 00:23:32.974 } 00:23:32.974 } 00:23:32.974 } 00:23:32.974 ]' 00:23:32.974 18:25:07 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:23:32.974 18:25:07 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:23:32.974 18:25:07 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:33.233 [2024-11-26 18:25:07.680131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.233 [2024-11-26 18:25:07.680228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:33.233 [2024-11-26 18:25:07.680252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:33.233 [2024-11-26 18:25:07.680267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.233 [2024-11-26 18:25:07.680324] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:33.233 [2024-11-26 18:25:07.683911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.233 [2024-11-26 18:25:07.683960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:33.233 [2024-11-26 18:25:07.684002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.559 ms 00:23:33.233 [2024-11-26 18:25:07.684013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.233 [2024-11-26 18:25:07.684834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.233 [2024-11-26 18:25:07.684861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:33.233 [2024-11-26 18:25:07.684878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:23:33.233 [2024-11-26 18:25:07.684893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.233 [2024-11-26 18:25:07.688296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.233 [2024-11-26 18:25:07.688329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:33.233 [2024-11-26 18:25:07.688365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.352 ms 00:23:33.233 [2024-11-26 18:25:07.688377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.498 [2024-11-26 18:25:07.695811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.498 [2024-11-26 18:25:07.695851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:33.498 [2024-11-26 18:25:07.695886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.342 ms 00:23:33.498 [2024-11-26 18:25:07.695898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.498 [2024-11-26 18:25:07.725917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.498 [2024-11-26 18:25:07.725995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:33.498 [2024-11-26 18:25:07.726041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.902 ms 00:23:33.498 [2024-11-26 18:25:07.726053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.498 [2024-11-26 18:25:07.745298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.498 [2024-11-26 18:25:07.745645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:33.498 [2024-11-26 18:25:07.745687] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.092 ms 00:23:33.498 [2024-11-26 18:25:07.745704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.498 [2024-11-26 18:25:07.746073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.498 [2024-11-26 18:25:07.746097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:33.498 [2024-11-26 18:25:07.746126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:23:33.498 [2024-11-26 18:25:07.746139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.498 [2024-11-26 18:25:07.776015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.498 [2024-11-26 18:25:07.776101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:33.498 [2024-11-26 18:25:07.776143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.812 ms 00:23:33.498 [2024-11-26 18:25:07.776155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.498 [2024-11-26 18:25:07.805296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.498 [2024-11-26 18:25:07.805636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:33.498 [2024-11-26 18:25:07.805681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.973 ms 00:23:33.498 [2024-11-26 18:25:07.805695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.498 [2024-11-26 18:25:07.836416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.498 [2024-11-26 18:25:07.836460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:33.498 [2024-11-26 18:25:07.836498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.587 ms 00:23:33.498 [2024-11-26 18:25:07.836510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.498 [2024-11-26 18:25:07.864695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.498 [2024-11-26 18:25:07.864737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:33.498 [2024-11-26 18:25:07.864775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.977 ms 00:23:33.498 [2024-11-26 18:25:07.864787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.498 [2024-11-26 18:25:07.864899] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:33.498 [2024-11-26 18:25:07.864942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:33.498 [2024-11-26 18:25:07.864959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:33.498 [2024-11-26 18:25:07.865006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:33.498 [2024-11-26 18:25:07.865022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:33.498 [2024-11-26 18:25:07.865035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:33.498 [2024-11-26 18:25:07.865053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:33.498 [2024-11-26 18:25:07.865066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:33.498 
[2024-11-26 18:25:07.865081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 8 through 81: 0 / 261120 wr_cnt: 0 state: free (74 per-band entries, all identical, condensed) 00:23:33.499 [2024-11-26 18:25:07.866311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:33.499 [2024-11-26 18:25:07.866324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:33.499 [2024-11-26 18:25:07.866339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:33.499 [2024-11-26 18:25:07.866352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:33.499 [2024-11-26 18:25:07.866399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:33.499 [2024-11-26 18:25:07.866411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:33.499 [2024-11-26 18:25:07.866426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:33.499 [2024-11-26 18:25:07.866437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:33.499 [2024-11-26 18:25:07.866452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:33.499 [2024-11-26 18:25:07.866464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:33.499 [2024-11-26 18:25:07.866478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:33.499 [2024-11-26 18:25:07.866518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:33.499 [2024-11-26 18:25:07.866546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:33.499 [2024-11-26 18:25:07.866558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:33.499 [2024-11-26 18:25:07.866585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:33.499 [2024-11-26 18:25:07.866601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:33.499 [2024-11-26 18:25:07.866618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:33.499 [2024-11-26 18:25:07.866631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:33.499 [2024-11-26 18:25:07.866646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:33.499 [2024-11-26 18:25:07.866668] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:33.499 [2024-11-26 18:25:07.866686] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e417d21-16e8-4060-8f5a-5ce9752d454b 00:23:33.499 [2024-11-26 18:25:07.866699] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:33.499 [2024-11-26 18:25:07.866717] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:33.499 [2024-11-26 18:25:07.866729] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:33.499 [2024-11-26 18:25:07.866744] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:33.499 [2024-11-26 18:25:07.866756] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:33.499 [2024-11-26 18:25:07.866771] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] crit: 0 00:23:33.499 [2024-11-26 18:25:07.866783] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:33.499 [2024-11-26 18:25:07.866797] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:33.499 [2024-11-26 18:25:07.866807] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:33.499 [2024-11-26 18:25:07.866843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.499 [2024-11-26 18:25:07.866866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:33.499 [2024-11-26 18:25:07.866881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.939 ms 00:23:33.499 [2024-11-26 18:25:07.866893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.499 [2024-11-26 18:25:07.884074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.499 [2024-11-26 18:25:07.884119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:33.499 [2024-11-26 18:25:07.884143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.127 ms 00:23:33.499 [2024-11-26 18:25:07.884155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.499 [2024-11-26 18:25:07.884796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.499 [2024-11-26 18:25:07.884846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:33.499 [2024-11-26 18:25:07.884866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:23:33.499 [2024-11-26 18:25:07.884878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.499 [2024-11-26 18:25:07.941589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.499 [2024-11-26 18:25:07.941693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:33.499 [2024-11-26 18:25:07.941735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.499 [2024-11-26 18:25:07.941749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.499 [2024-11-26 18:25:07.941965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.499 [2024-11-26 18:25:07.941984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:33.499 [2024-11-26 18:25:07.942000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.499 [2024-11-26 18:25:07.942011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.499 [2024-11-26 18:25:07.942127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.499 [2024-11-26 18:25:07.942148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:33.499 [2024-11-26 18:25:07.942167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.499 [2024-11-26 18:25:07.942179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.499 [2024-11-26 18:25:07.942231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.499 [2024-11-26 18:25:07.942245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:33.499 [2024-11-26 18:25:07.942259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.499 [2024-11-26 18:25:07.942270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.767 [2024-11-26 
18:25:08.050320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.767 [2024-11-26 18:25:08.050410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:33.767 [2024-11-26 18:25:08.050451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.767 [2024-11-26 18:25:08.050464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.767 [2024-11-26 18:25:08.129710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.767 [2024-11-26 18:25:08.129796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:33.767 [2024-11-26 18:25:08.129835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.767 [2024-11-26 18:25:08.129848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.767 [2024-11-26 18:25:08.130014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.767 [2024-11-26 18:25:08.130037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:33.767 [2024-11-26 18:25:08.130056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.767 [2024-11-26 18:25:08.130067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.767 [2024-11-26 18:25:08.130151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.767 [2024-11-26 18:25:08.130165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:33.767 [2024-11-26 18:25:08.130179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.767 [2024-11-26 18:25:08.130189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.767 [2024-11-26 18:25:08.130357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.767 [2024-11-26 18:25:08.130378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:33.767 [2024-11-26 18:25:08.130397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.767 [2024-11-26 18:25:08.130408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.768 [2024-11-26 18:25:08.130521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.768 [2024-11-26 18:25:08.130547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:33.768 [2024-11-26 18:25:08.130561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.768 [2024-11-26 18:25:08.130602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.768 [2024-11-26 18:25:08.130728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.768 [2024-11-26 18:25:08.130745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:33.768 [2024-11-26 18:25:08.130776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.768 [2024-11-26 18:25:08.130789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.768 [2024-11-26 18:25:08.130889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.768 [2024-11-26 18:25:08.130905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:33.768 [2024-11-26 18:25:08.130920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.768 [2024-11-26 18:25:08.130931] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.768 [2024-11-26 18:25:08.131233] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 451.086 ms, result 0 00:23:33.768 true 00:23:33.768 18:25:08 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78353 00:23:33.768 18:25:08 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78353 ']' 00:23:33.768 18:25:08 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78353 00:23:33.768 18:25:08 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:23:33.768 18:25:08 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.768 18:25:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78353 00:23:33.768 killing process with pid 78353 00:23:33.768 18:25:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:33.768 18:25:08 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:33.768 18:25:08 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78353' 00:23:33.768 18:25:08 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78353 00:23:33.768 18:25:08 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78353 00:23:39.065 18:25:12 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:23:39.633 65536+0 records in 00:23:39.633 65536+0 records out 00:23:39.633 268435456 bytes (268 MB, 256 MiB) copied, 1.11296 s, 241 MB/s 00:23:39.633 18:25:13 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:39.633 [2024-11-26 18:25:13.978528] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
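(Note on the trim payload staged above: the test builds its 256 MiB pattern in two steps. Plain dd emits 65536 blocks of 4 KiB, i.e. 268435456 bytes; over the reported 1.11296 s elapsed that is 268435456 / 1.11296 ≈ 241 MB/s, matching dd's own summary line. spdk_dd then replays that file into the ftl0 bdev. A minimal stand-alone sketch of the same two steps follows; the of= target for dd is an assumption, since the traced command's output redirection is not shown, and the other paths simply mirror the spdk_dd invocation above.)

    # Stage 1: generate the 256 MiB random pattern (65536 blocks of 4 KiB).
    # of= is assumed here; the trace above does not show where dd's output went.
    dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536

    # Stage 2: replay the pattern into the ftl0 bdev with SPDK's dd clone.
    # --ob names an output bdev instead of a file; --json supplies the bdev
    # configuration that an earlier stage wrote out when ftl0 was created.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
        --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json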
00:23:39.633 [2024-11-26 18:25:13.978711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78563 ] 00:23:39.891 [2024-11-26 18:25:14.158200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.891 [2024-11-26 18:25:14.307808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.458 [2024-11-26 18:25:14.656450] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:40.458 [2024-11-26 18:25:14.656817] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:40.458 [2024-11-26 18:25:14.822833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.458 [2024-11-26 18:25:14.822898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:40.458 [2024-11-26 18:25:14.822934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:40.458 [2024-11-26 18:25:14.822946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.458 [2024-11-26 18:25:14.826351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.458 [2024-11-26 18:25:14.826574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:40.458 [2024-11-26 18:25:14.826605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.378 ms 00:23:40.459 [2024-11-26 18:25:14.826618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.459 [2024-11-26 18:25:14.826793] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:40.459 [2024-11-26 18:25:14.827717] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:40.459 [2024-11-26 18:25:14.827751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.459 [2024-11-26 18:25:14.827764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:40.459 [2024-11-26 18:25:14.827776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.968 ms 00:23:40.459 [2024-11-26 18:25:14.827787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.459 [2024-11-26 18:25:14.829831] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:40.459 [2024-11-26 18:25:14.844793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.459 [2024-11-26 18:25:14.845000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:40.459 [2024-11-26 18:25:14.845029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.964 ms 00:23:40.459 [2024-11-26 18:25:14.845041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.459 [2024-11-26 18:25:14.845160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.459 [2024-11-26 18:25:14.845182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:40.459 [2024-11-26 18:25:14.845195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:40.459 [2024-11-26 18:25:14.845205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.459 [2024-11-26 18:25:14.854058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:40.459 [2024-11-26 18:25:14.854102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:40.459 [2024-11-26 18:25:14.854133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.797 ms 00:23:40.459 [2024-11-26 18:25:14.854144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.459 [2024-11-26 18:25:14.854266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.459 [2024-11-26 18:25:14.854286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:40.459 [2024-11-26 18:25:14.854299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:40.459 [2024-11-26 18:25:14.854309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.459 [2024-11-26 18:25:14.854378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.459 [2024-11-26 18:25:14.854393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:40.459 [2024-11-26 18:25:14.854405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:40.459 [2024-11-26 18:25:14.854415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.459 [2024-11-26 18:25:14.854443] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:40.459 [2024-11-26 18:25:14.859175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.459 [2024-11-26 18:25:14.859212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:40.459 [2024-11-26 18:25:14.859243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.739 ms 00:23:40.459 [2024-11-26 18:25:14.859253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.459 [2024-11-26 18:25:14.859332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.459 [2024-11-26 18:25:14.859351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:40.459 [2024-11-26 18:25:14.859363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:40.459 [2024-11-26 18:25:14.859373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.459 [2024-11-26 18:25:14.859409] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:40.459 [2024-11-26 18:25:14.859435] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:40.459 [2024-11-26 18:25:14.859473] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:40.459 [2024-11-26 18:25:14.859491] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:40.459 [2024-11-26 18:25:14.859626] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:40.459 [2024-11-26 18:25:14.859647] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:40.459 [2024-11-26 18:25:14.859661] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:40.459 [2024-11-26 18:25:14.859697] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:40.459 [2024-11-26 18:25:14.859710] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:40.459 [2024-11-26 18:25:14.859721] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:40.459 [2024-11-26 18:25:14.859731] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:40.459 [2024-11-26 18:25:14.859741] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:40.459 [2024-11-26 18:25:14.859751] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:40.459 [2024-11-26 18:25:14.859763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.459 [2024-11-26 18:25:14.859779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:40.459 [2024-11-26 18:25:14.859790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:23:40.459 [2024-11-26 18:25:14.859800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.459 [2024-11-26 18:25:14.859893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.459 [2024-11-26 18:25:14.859914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:40.459 [2024-11-26 18:25:14.859941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:40.459 [2024-11-26 18:25:14.859968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.459 [2024-11-26 18:25:14.860095] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:40.459 [2024-11-26 18:25:14.860119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:40.459 [2024-11-26 18:25:14.860131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:40.459 [2024-11-26 18:25:14.860142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.459 [2024-11-26 18:25:14.860153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:40.459 [2024-11-26 18:25:14.860163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:40.459 [2024-11-26 18:25:14.860173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:40.459 [2024-11-26 18:25:14.860183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:40.459 [2024-11-26 18:25:14.860193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:40.459 [2024-11-26 18:25:14.860203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:40.459 [2024-11-26 18:25:14.860213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:40.459 [2024-11-26 18:25:14.860240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:40.459 [2024-11-26 18:25:14.860250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:40.459 [2024-11-26 18:25:14.860260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:40.459 [2024-11-26 18:25:14.860271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:40.459 [2024-11-26 18:25:14.860280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.459 [2024-11-26 18:25:14.860290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:40.459 [2024-11-26 18:25:14.860300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:40.459 [2024-11-26 18:25:14.860309] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.459 [2024-11-26 18:25:14.860318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:40.459 [2024-11-26 18:25:14.860328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:40.459 [2024-11-26 18:25:14.860338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.459 [2024-11-26 18:25:14.860348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:40.459 [2024-11-26 18:25:14.860357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:40.459 [2024-11-26 18:25:14.860367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.459 [2024-11-26 18:25:14.860376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:40.459 [2024-11-26 18:25:14.860401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:40.459 [2024-11-26 18:25:14.860410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.459 [2024-11-26 18:25:14.860419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:40.459 [2024-11-26 18:25:14.860428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:40.459 [2024-11-26 18:25:14.860437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.459 [2024-11-26 18:25:14.860446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:40.459 [2024-11-26 18:25:14.860456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:40.459 [2024-11-26 18:25:14.860465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:40.459 [2024-11-26 18:25:14.860474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:40.459 [2024-11-26 18:25:14.860484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:40.459 [2024-11-26 18:25:14.860493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:40.459 [2024-11-26 18:25:14.860504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:40.459 [2024-11-26 18:25:14.860514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:40.459 [2024-11-26 18:25:14.860523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.459 [2024-11-26 18:25:14.860533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:40.459 [2024-11-26 18:25:14.860543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:40.459 [2024-11-26 18:25:14.860552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.459 [2024-11-26 18:25:14.860563] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:40.460 [2024-11-26 18:25:14.860575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:40.460 [2024-11-26 18:25:14.860590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:40.460 [2024-11-26 18:25:14.860605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.460 [2024-11-26 18:25:14.860631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:40.460 [2024-11-26 18:25:14.860642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:40.460 [2024-11-26 18:25:14.860652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:40.460 
[2024-11-26 18:25:14.860678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:40.460 [2024-11-26 18:25:14.860689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:40.460 [2024-11-26 18:25:14.860700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:40.460 [2024-11-26 18:25:14.860712] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:40.460 [2024-11-26 18:25:14.860725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:40.460 [2024-11-26 18:25:14.860737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:40.460 [2024-11-26 18:25:14.860748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:40.460 [2024-11-26 18:25:14.860758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:40.460 [2024-11-26 18:25:14.860768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:40.460 [2024-11-26 18:25:14.860778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:40.460 [2024-11-26 18:25:14.860788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:40.460 [2024-11-26 18:25:14.860798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:40.460 [2024-11-26 18:25:14.860808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:40.460 [2024-11-26 18:25:14.860818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:40.460 [2024-11-26 18:25:14.860828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:40.460 [2024-11-26 18:25:14.860838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:40.460 [2024-11-26 18:25:14.860848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:40.460 [2024-11-26 18:25:14.860859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:40.460 [2024-11-26 18:25:14.860870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:40.460 [2024-11-26 18:25:14.860880] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:40.460 [2024-11-26 18:25:14.860892] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:40.460 [2024-11-26 18:25:14.860903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:40.460 [2024-11-26 18:25:14.860914] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:40.460 [2024-11-26 18:25:14.860924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:40.460 [2024-11-26 18:25:14.860935] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:40.460 [2024-11-26 18:25:14.860947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.460 [2024-11-26 18:25:14.860964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:40.460 [2024-11-26 18:25:14.860975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.916 ms 00:23:40.460 [2024-11-26 18:25:14.860985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.460 [2024-11-26 18:25:14.898779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.460 [2024-11-26 18:25:14.898892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:40.460 [2024-11-26 18:25:14.898929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.717 ms 00:23:40.460 [2024-11-26 18:25:14.898941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.460 [2024-11-26 18:25:14.899128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.460 [2024-11-26 18:25:14.899148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:40.460 [2024-11-26 18:25:14.899161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:40.460 [2024-11-26 18:25:14.899172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.720 [2024-11-26 18:25:14.946638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.720 [2024-11-26 18:25:14.946708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:40.720 [2024-11-26 18:25:14.946749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.436 ms 00:23:40.720 [2024-11-26 18:25:14.946761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.720 [2024-11-26 18:25:14.946931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.720 [2024-11-26 18:25:14.946951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:40.720 [2024-11-26 18:25:14.946964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:40.720 [2024-11-26 18:25:14.946975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.720 [2024-11-26 18:25:14.947536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.720 [2024-11-26 18:25:14.947570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:40.720 [2024-11-26 18:25:14.947594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:23:40.720 [2024-11-26 18:25:14.947604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.720 [2024-11-26 18:25:14.947800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.720 [2024-11-26 18:25:14.947818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:40.720 [2024-11-26 18:25:14.947830] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:23:40.720 [2024-11-26 18:25:14.947841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.720 [2024-11-26 18:25:14.966777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.720 [2024-11-26 18:25:14.966837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:40.720 [2024-11-26 18:25:14.966871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.906 ms 00:23:40.720 [2024-11-26 18:25:14.966882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.720 [2024-11-26 18:25:14.982125] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:40.720 [2024-11-26 18:25:14.982187] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:40.720 [2024-11-26 18:25:14.982204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.720 [2024-11-26 18:25:14.982215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:40.720 [2024-11-26 18:25:14.982227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.176 ms 00:23:40.720 [2024-11-26 18:25:14.982238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.720 [2024-11-26 18:25:15.008641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.720 [2024-11-26 18:25:15.008697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:40.720 [2024-11-26 18:25:15.008713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.313 ms 00:23:40.720 [2024-11-26 18:25:15.008725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.720 [2024-11-26 18:25:15.022659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.720 [2024-11-26 18:25:15.022713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:40.720 [2024-11-26 18:25:15.022729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.841 ms 00:23:40.720 [2024-11-26 18:25:15.022740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.720 [2024-11-26 18:25:15.036160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.720 [2024-11-26 18:25:15.036213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:40.720 [2024-11-26 18:25:15.036227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.299 ms 00:23:40.720 [2024-11-26 18:25:15.036237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.720 [2024-11-26 18:25:15.037097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.720 [2024-11-26 18:25:15.037125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:40.720 [2024-11-26 18:25:15.037139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:23:40.720 [2024-11-26 18:25:15.037150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.720 [2024-11-26 18:25:15.108675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.720 [2024-11-26 18:25:15.108772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:40.720 [2024-11-26 18:25:15.108793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 71.490 ms 00:23:40.720 [2024-11-26 18:25:15.108805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.720 [2024-11-26 18:25:15.119952] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:40.720 [2024-11-26 18:25:15.139381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.720 [2024-11-26 18:25:15.139464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:40.720 [2024-11-26 18:25:15.139483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.434 ms 00:23:40.720 [2024-11-26 18:25:15.139503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.720 [2024-11-26 18:25:15.139669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.720 [2024-11-26 18:25:15.139690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:40.720 [2024-11-26 18:25:15.139703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:40.720 [2024-11-26 18:25:15.139715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.720 [2024-11-26 18:25:15.139791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.720 [2024-11-26 18:25:15.139808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:40.720 [2024-11-26 18:25:15.139819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:40.720 [2024-11-26 18:25:15.139837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.720 [2024-11-26 18:25:15.139886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.720 [2024-11-26 18:25:15.139903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:40.720 [2024-11-26 18:25:15.139915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:40.720 [2024-11-26 18:25:15.139925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.720 [2024-11-26 18:25:15.139990] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:40.720 [2024-11-26 18:25:15.140007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.720 [2024-11-26 18:25:15.140018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:40.720 [2024-11-26 18:25:15.140029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:40.720 [2024-11-26 18:25:15.140040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.720 [2024-11-26 18:25:15.168009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.721 [2024-11-26 18:25:15.168065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:40.721 [2024-11-26 18:25:15.168081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.943 ms 00:23:40.721 [2024-11-26 18:25:15.168092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.721 [2024-11-26 18:25:15.168222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.721 [2024-11-26 18:25:15.168242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:40.721 [2024-11-26 18:25:15.168255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:23:40.721 [2024-11-26 18:25:15.168265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:40.721 [2024-11-26 18:25:15.169776] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:40.721 [2024-11-26 18:25:15.173518] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 346.469 ms, result 0 00:23:40.721 [2024-11-26 18:25:15.174393] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:40.979 [2024-11-26 18:25:15.189435] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:41.917  [2024-11-26T18:25:17.315Z] Copying: 20/256 [MB] (20 MBps) [2024-11-26T18:25:18.251Z] Copying: 41/256 [MB] (20 MBps) [2024-11-26T18:25:19.628Z] Copying: 62/256 [MB] (20 MBps) [2024-11-26T18:25:20.563Z] Copying: 83/256 [MB] (20 MBps) [2024-11-26T18:25:21.498Z] Copying: 103/256 [MB] (20 MBps) [2024-11-26T18:25:22.432Z] Copying: 125/256 [MB] (21 MBps) [2024-11-26T18:25:23.367Z] Copying: 146/256 [MB] (21 MBps) [2024-11-26T18:25:24.302Z] Copying: 167/256 [MB] (21 MBps) [2024-11-26T18:25:25.237Z] Copying: 188/256 [MB] (20 MBps) [2024-11-26T18:25:26.664Z] Copying: 208/256 [MB] (20 MBps) [2024-11-26T18:25:27.238Z] Copying: 228/256 [MB] (19 MBps) [2024-11-26T18:25:27.496Z] Copying: 250/256 [MB] (21 MBps) [2024-11-26T18:25:27.496Z] Copying: 256/256 [MB] (average 20 MBps)[2024-11-26 18:25:27.471813] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:53.035 [2024-11-26 18:25:27.483158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.035 [2024-11-26 18:25:27.483356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:53.035 [2024-11-26 18:25:27.483387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:53.035 [2024-11-26 18:25:27.483408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.035 [2024-11-26 18:25:27.483445] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:53.035 [2024-11-26 18:25:27.486777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.036 [2024-11-26 18:25:27.486812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:53.036 [2024-11-26 18:25:27.486858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.311 ms 00:23:53.036 [2024-11-26 18:25:27.486867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.036 [2024-11-26 18:25:27.488805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.036 [2024-11-26 18:25:27.488845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:53.036 [2024-11-26 18:25:27.488877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.911 ms 00:23:53.036 [2024-11-26 18:25:27.488887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.295 [2024-11-26 18:25:27.495564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.296 [2024-11-26 18:25:27.495618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:53.296 [2024-11-26 18:25:27.495649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.655 ms 00:23:53.296 [2024-11-26 18:25:27.495659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.296 [2024-11-26 18:25:27.501674] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.296 [2024-11-26 18:25:27.501707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:53.296 [2024-11-26 18:25:27.501735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.958 ms 00:23:53.296 [2024-11-26 18:25:27.501746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.296 [2024-11-26 18:25:27.527260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.296 [2024-11-26 18:25:27.527463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:53.296 [2024-11-26 18:25:27.527491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.463 ms 00:23:53.296 [2024-11-26 18:25:27.527503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.296 [2024-11-26 18:25:27.543380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.296 [2024-11-26 18:25:27.543422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:53.296 [2024-11-26 18:25:27.543462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.775 ms 00:23:53.296 [2024-11-26 18:25:27.543473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.296 [2024-11-26 18:25:27.543676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.296 [2024-11-26 18:25:27.543714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:53.296 [2024-11-26 18:25:27.543727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:23:53.296 [2024-11-26 18:25:27.543754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.296 [2024-11-26 18:25:27.570378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.296 [2024-11-26 18:25:27.570417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:53.296 [2024-11-26 18:25:27.570448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.601 ms 00:23:53.296 [2024-11-26 18:25:27.570457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.296 [2024-11-26 18:25:27.597416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.296 [2024-11-26 18:25:27.597456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:53.296 [2024-11-26 18:25:27.597486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.874 ms 00:23:53.296 [2024-11-26 18:25:27.597496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.296 [2024-11-26 18:25:27.622896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.296 [2024-11-26 18:25:27.623096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:53.296 [2024-11-26 18:25:27.623121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.342 ms 00:23:53.296 [2024-11-26 18:25:27.623133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.296 [2024-11-26 18:25:27.648625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.296 [2024-11-26 18:25:27.648664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:53.296 [2024-11-26 18:25:27.648695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.401 ms 00:23:53.296 [2024-11-26 18:25:27.648704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:53.296 [2024-11-26 18:25:27.648747] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:53.296 [2024-11-26 18:25:27.648768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.648781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.648791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.648800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.648810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.648819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.648828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.648838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.648848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.648858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.648867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.648877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.648886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.648896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.648906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.648915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.648925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.648934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.648944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.648954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.648963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.648972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.648982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.648991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:23:53.296 [2024-11-26 18:25:27.649002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:53.296 [2024-11-26 18:25:27.649349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649831] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:53.297 [2024-11-26 18:25:27.649860] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:53.297 [2024-11-26 18:25:27.649871] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e417d21-16e8-4060-8f5a-5ce9752d454b 00:23:53.297 [2024-11-26 18:25:27.649882] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:53.297 [2024-11-26 18:25:27.649892] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:53.297 [2024-11-26 18:25:27.649903] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:53.297 [2024-11-26 18:25:27.649912] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:53.297 [2024-11-26 18:25:27.649922] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:53.297 [2024-11-26 18:25:27.649932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:53.297 [2024-11-26 18:25:27.649946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:53.297 [2024-11-26 18:25:27.649955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:53.297 [2024-11-26 18:25:27.649964] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:53.297 [2024-11-26 18:25:27.649973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.297 [2024-11-26 18:25:27.649984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:53.297 [2024-11-26 18:25:27.650009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.228 ms 00:23:53.297 [2024-11-26 18:25:27.650019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.297 [2024-11-26 18:25:27.665034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.297 [2024-11-26 18:25:27.665072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:53.297 [2024-11-26 18:25:27.665103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.944 ms 00:23:53.297 [2024-11-26 18:25:27.665112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.297 [2024-11-26 18:25:27.665561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.297 [2024-11-26 18:25:27.665624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:53.297 [2024-11-26 18:25:27.665653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:23:53.297 [2024-11-26 18:25:27.665679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.297 [2024-11-26 18:25:27.706165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.297 [2024-11-26 18:25:27.706209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:53.297 [2024-11-26 18:25:27.706240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.297 [2024-11-26 18:25:27.706264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.297 [2024-11-26 18:25:27.706357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.297 [2024-11-26 18:25:27.706374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:53.297 
[2024-11-26 18:25:27.706385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.297 [2024-11-26 18:25:27.706395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.297 [2024-11-26 18:25:27.706457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.297 [2024-11-26 18:25:27.706475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:53.297 [2024-11-26 18:25:27.706486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.297 [2024-11-26 18:25:27.706495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.297 [2024-11-26 18:25:27.706601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.297 [2024-11-26 18:25:27.706619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:53.297 [2024-11-26 18:25:27.706631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.297 [2024-11-26 18:25:27.706653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.555 [2024-11-26 18:25:27.795116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.555 [2024-11-26 18:25:27.795184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:53.555 [2024-11-26 18:25:27.795218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.555 [2024-11-26 18:25:27.795229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.555 [2024-11-26 18:25:27.867126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.555 [2024-11-26 18:25:27.867182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:53.555 [2024-11-26 18:25:27.867216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.555 [2024-11-26 18:25:27.867227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.555 [2024-11-26 18:25:27.867303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.555 [2024-11-26 18:25:27.867321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:53.555 [2024-11-26 18:25:27.867332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.555 [2024-11-26 18:25:27.867343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.555 [2024-11-26 18:25:27.867378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.555 [2024-11-26 18:25:27.867399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:53.555 [2024-11-26 18:25:27.867410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.555 [2024-11-26 18:25:27.867420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.556 [2024-11-26 18:25:27.867542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.556 [2024-11-26 18:25:27.867561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:53.556 [2024-11-26 18:25:27.867633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.556 [2024-11-26 18:25:27.867648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.556 [2024-11-26 18:25:27.867702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.556 [2024-11-26 18:25:27.867736] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:53.556 [2024-11-26 18:25:27.867755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.556 [2024-11-26 18:25:27.867766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.556 [2024-11-26 18:25:27.867818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.556 [2024-11-26 18:25:27.867834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:53.556 [2024-11-26 18:25:27.867845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.556 [2024-11-26 18:25:27.867856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.556 [2024-11-26 18:25:27.867915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.556 [2024-11-26 18:25:27.867937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:53.556 [2024-11-26 18:25:27.867949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.556 [2024-11-26 18:25:27.867975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.556 [2024-11-26 18:25:27.868200] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 385.021 ms, result 0 00:23:54.491 00:23:54.491 00:23:54.749 18:25:28 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78710 00:23:54.749 18:25:28 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:23:54.749 18:25:28 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78710 00:23:54.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.749 18:25:28 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78710 ']' 00:23:54.749 18:25:28 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.749 18:25:28 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.749 18:25:28 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:54.749 18:25:28 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.749 18:25:28 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:54.749 [2024-11-26 18:25:29.089828] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
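
The waitforlisten step above is the harness's standard launch pattern: start spdk_tgt, then block until the UNIX domain RPC socket answers before issuing any RPCs. Below is a minimal sketch of that pattern, assuming the default socket path /var/tmp/spdk.sock and using the spdk_get_version RPC as the readiness probe; the real helper lives in common/autotest_common.sh, and this is not its actual implementation.

  #!/usr/bin/env bash
  # Illustrative launch-and-wait sketch; paths match the run above.
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  RPC_SOCK=/var/tmp/spdk.sock

  # Start the target with the ftl_init log flag enabled, as in this run.
  "$SPDK_DIR/build/bin/spdk_tgt" -L ftl_init &
  svcpid=$!

  # Poll until the app is listening; spdk_get_version is a cheap RPC that
  # succeeds as soon as the socket is up.
  until "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" spdk_get_version >/dev/null 2>&1; do
      kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done
  echo "spdk_tgt (pid $svcpid) is listening on $RPC_SOCK"
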
00:23:54.749 [2024-11-26 18:25:29.090397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78710 ] 00:23:55.006 [2024-11-26 18:25:29.267424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.006 [2024-11-26 18:25:29.389382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.940 18:25:30 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.940 18:25:30 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:23:55.940 18:25:30 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:23:56.199 [2024-11-26 18:25:30.486058] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:56.199 [2024-11-26 18:25:30.486171] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:56.459 [2024-11-26 18:25:30.675380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.459 [2024-11-26 18:25:30.675462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:56.459 [2024-11-26 18:25:30.675521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:56.459 [2024-11-26 18:25:30.675536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.459 [2024-11-26 18:25:30.679784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.459 [2024-11-26 18:25:30.679851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:56.459 [2024-11-26 18:25:30.679885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.185 ms 00:23:56.459 [2024-11-26 18:25:30.679897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.459 [2024-11-26 18:25:30.680070] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:56.459 [2024-11-26 18:25:30.681014] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:56.459 [2024-11-26 18:25:30.681067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.459 [2024-11-26 18:25:30.681082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:56.459 [2024-11-26 18:25:30.681096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.013 ms 00:23:56.459 [2024-11-26 18:25:30.681110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.459 [2024-11-26 18:25:30.683327] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:56.459 [2024-11-26 18:25:30.699798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.459 [2024-11-26 18:25:30.699888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:56.459 [2024-11-26 18:25:30.699909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.477 ms 00:23:56.459 [2024-11-26 18:25:30.699925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.459 [2024-11-26 18:25:30.700046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.459 [2024-11-26 18:25:30.700072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:56.459 [2024-11-26 18:25:30.700086] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:56.459 [2024-11-26 18:25:30.700101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.459 [2024-11-26 18:25:30.709244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.459 [2024-11-26 18:25:30.709317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:56.459 [2024-11-26 18:25:30.709334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.075 ms 00:23:56.459 [2024-11-26 18:25:30.709349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.459 [2024-11-26 18:25:30.709509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.459 [2024-11-26 18:25:30.709573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:56.459 [2024-11-26 18:25:30.709587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:23:56.459 [2024-11-26 18:25:30.709644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.459 [2024-11-26 18:25:30.709688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.459 [2024-11-26 18:25:30.709712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:56.459 [2024-11-26 18:25:30.709726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:56.459 [2024-11-26 18:25:30.709744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.459 [2024-11-26 18:25:30.709780] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:56.459 [2024-11-26 18:25:30.714948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.459 [2024-11-26 18:25:30.715023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:56.459 [2024-11-26 18:25:30.715046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.170 ms 00:23:56.459 [2024-11-26 18:25:30.715059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.459 [2024-11-26 18:25:30.715164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.459 [2024-11-26 18:25:30.715185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:56.459 [2024-11-26 18:25:30.715212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:56.459 [2024-11-26 18:25:30.715225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.459 [2024-11-26 18:25:30.715280] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:56.459 [2024-11-26 18:25:30.715313] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:56.459 [2024-11-26 18:25:30.715378] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:56.459 [2024-11-26 18:25:30.715404] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:56.459 [2024-11-26 18:25:30.715522] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:56.459 [2024-11-26 18:25:30.715550] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:56.459 [2024-11-26 18:25:30.715601] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:56.459 [2024-11-26 18:25:30.715619] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:56.459 [2024-11-26 18:25:30.715638] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:56.459 [2024-11-26 18:25:30.715651] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:56.459 [2024-11-26 18:25:30.715669] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:56.459 [2024-11-26 18:25:30.715681] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:56.459 [2024-11-26 18:25:30.715703] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:56.459 [2024-11-26 18:25:30.715717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.459 [2024-11-26 18:25:30.715734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:56.459 [2024-11-26 18:25:30.715748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.448 ms 00:23:56.459 [2024-11-26 18:25:30.715772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.459 [2024-11-26 18:25:30.715874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.459 [2024-11-26 18:25:30.715910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:56.459 [2024-11-26 18:25:30.715924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:56.459 [2024-11-26 18:25:30.715941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.459 [2024-11-26 18:25:30.716058] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:56.459 [2024-11-26 18:25:30.716084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:56.459 [2024-11-26 18:25:30.716098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:56.459 [2024-11-26 18:25:30.716116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.459 [2024-11-26 18:25:30.716129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:56.459 [2024-11-26 18:25:30.716146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:56.459 [2024-11-26 18:25:30.716158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:56.459 [2024-11-26 18:25:30.716184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:56.459 [2024-11-26 18:25:30.716197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:56.459 [2024-11-26 18:25:30.716214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:56.459 [2024-11-26 18:25:30.716226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:56.459 [2024-11-26 18:25:30.716243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:56.459 [2024-11-26 18:25:30.716254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:56.459 [2024-11-26 18:25:30.716270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:56.459 [2024-11-26 18:25:30.716282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:56.459 [2024-11-26 18:25:30.716298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.459 
[2024-11-26 18:25:30.716310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:56.459 [2024-11-26 18:25:30.716326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:56.459 [2024-11-26 18:25:30.716356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.459 [2024-11-26 18:25:30.716374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:56.459 [2024-11-26 18:25:30.716395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:56.459 [2024-11-26 18:25:30.716411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:56.459 [2024-11-26 18:25:30.716423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:56.459 [2024-11-26 18:25:30.716445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:56.459 [2024-11-26 18:25:30.716456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:56.459 [2024-11-26 18:25:30.716473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:56.459 [2024-11-26 18:25:30.716485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:56.459 [2024-11-26 18:25:30.716500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:56.459 [2024-11-26 18:25:30.716512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:56.459 [2024-11-26 18:25:30.716529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:56.459 [2024-11-26 18:25:30.716541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:56.459 [2024-11-26 18:25:30.716569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:56.459 [2024-11-26 18:25:30.716584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:56.459 [2024-11-26 18:25:30.716603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:56.459 [2024-11-26 18:25:30.716616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:56.459 [2024-11-26 18:25:30.716632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:56.459 [2024-11-26 18:25:30.716644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:56.459 [2024-11-26 18:25:30.716660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:56.459 [2024-11-26 18:25:30.716672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:56.460 [2024-11-26 18:25:30.716694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.460 [2024-11-26 18:25:30.716710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:56.460 [2024-11-26 18:25:30.716727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:56.460 [2024-11-26 18:25:30.716738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.460 [2024-11-26 18:25:30.716755] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:56.460 [2024-11-26 18:25:30.716774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:56.460 [2024-11-26 18:25:30.716791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:56.460 [2024-11-26 18:25:30.716804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.460 [2024-11-26 18:25:30.716822] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:23:56.460 [2024-11-26 18:25:30.716834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:56.460 [2024-11-26 18:25:30.716851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:56.460 [2024-11-26 18:25:30.716863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:56.460 [2024-11-26 18:25:30.716879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:56.460 [2024-11-26 18:25:30.716890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:56.460 [2024-11-26 18:25:30.716909] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:56.460 [2024-11-26 18:25:30.716925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:56.460 [2024-11-26 18:25:30.716949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:56.460 [2024-11-26 18:25:30.716962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:56.460 [2024-11-26 18:25:30.716984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:56.460 [2024-11-26 18:25:30.716997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:56.460 [2024-11-26 18:25:30.717015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:56.460 [2024-11-26 18:25:30.717028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:56.460 [2024-11-26 18:25:30.717045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:56.460 [2024-11-26 18:25:30.717058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:56.460 [2024-11-26 18:25:30.717076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:56.460 [2024-11-26 18:25:30.717088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:56.460 [2024-11-26 18:25:30.717106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:56.460 [2024-11-26 18:25:30.717118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:56.460 [2024-11-26 18:25:30.717135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:56.460 [2024-11-26 18:25:30.717148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:56.460 [2024-11-26 18:25:30.717166] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:56.460 [2024-11-26 
18:25:30.717180] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:56.460 [2024-11-26 18:25:30.717219] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:56.460 [2024-11-26 18:25:30.717233] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:56.460 [2024-11-26 18:25:30.717251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:56.460 [2024-11-26 18:25:30.717263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:56.460 [2024-11-26 18:25:30.717282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.460 [2024-11-26 18:25:30.717295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:56.460 [2024-11-26 18:25:30.717313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.286 ms 00:23:56.460 [2024-11-26 18:25:30.717331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.460 [2024-11-26 18:25:30.759200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.460 [2024-11-26 18:25:30.759279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:56.460 [2024-11-26 18:25:30.759333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.775 ms 00:23:56.460 [2024-11-26 18:25:30.759350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.460 [2024-11-26 18:25:30.759576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.460 [2024-11-26 18:25:30.759597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:56.460 [2024-11-26 18:25:30.759626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:23:56.460 [2024-11-26 18:25:30.759642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.460 [2024-11-26 18:25:30.805571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.460 [2024-11-26 18:25:30.805648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:56.460 [2024-11-26 18:25:30.805686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.878 ms 00:23:56.460 [2024-11-26 18:25:30.805698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.460 [2024-11-26 18:25:30.805839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.460 [2024-11-26 18:25:30.805858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:56.460 [2024-11-26 18:25:30.805895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:56.460 [2024-11-26 18:25:30.805907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.460 [2024-11-26 18:25:30.806551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.460 [2024-11-26 18:25:30.806611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:56.460 [2024-11-26 18:25:30.806630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:23:56.460 [2024-11-26 18:25:30.806643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
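
The layout dump above can be cross-checked by hand: the l2p region is sized as L2P entries times L2P address size, and 23592960 x 4 B = 94371840 B = 90.00 MiB, which matches the "Region l2p ... blocks: 90.00 MiB" line in the NV cache layout. An illustrative one-liner reproducing the arithmetic:

  # l2p region size = entries x address size, per the layout dump above
  entries=23592960
  addr_size=4
  printf '%d MiB\n' $(( entries * addr_size / 1024 / 1024 ))   # -> 90 MiB
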
[FTL][ftl0] status: 0 00:23:56.460 [2024-11-26 18:25:30.806850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.460 [2024-11-26 18:25:30.806884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:56.460 [2024-11-26 18:25:30.806901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:23:56.460 [2024-11-26 18:25:30.806913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.460 [2024-11-26 18:25:30.828960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.460 [2024-11-26 18:25:30.829022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:56.460 [2024-11-26 18:25:30.829046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.009 ms 00:23:56.460 [2024-11-26 18:25:30.829058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.460 [2024-11-26 18:25:30.852024] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:56.460 [2024-11-26 18:25:30.852085] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:56.460 [2024-11-26 18:25:30.852127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.460 [2024-11-26 18:25:30.852139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:56.460 [2024-11-26 18:25:30.852154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.920 ms 00:23:56.460 [2024-11-26 18:25:30.852177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.460 [2024-11-26 18:25:30.877918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.460 [2024-11-26 18:25:30.877978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:56.460 [2024-11-26 18:25:30.878013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.640 ms 00:23:56.460 [2024-11-26 18:25:30.878025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.460 [2024-11-26 18:25:30.891803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.460 [2024-11-26 18:25:30.891861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:56.460 [2024-11-26 18:25:30.891899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.680 ms 00:23:56.460 [2024-11-26 18:25:30.891910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.460 [2024-11-26 18:25:30.905516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.460 [2024-11-26 18:25:30.905597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:56.460 [2024-11-26 18:25:30.905635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.510 ms 00:23:56.460 [2024-11-26 18:25:30.905646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.461 [2024-11-26 18:25:30.906490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.461 [2024-11-26 18:25:30.906568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:56.461 [2024-11-26 18:25:30.906596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:23:56.461 [2024-11-26 18:25:30.906609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.719 [2024-11-26 
18:25:30.977595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.719 [2024-11-26 18:25:30.977701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:56.719 [2024-11-26 18:25:30.977742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.943 ms 00:23:56.719 [2024-11-26 18:25:30.977754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.719 [2024-11-26 18:25:30.988990] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:56.720 [2024-11-26 18:25:31.008554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.720 [2024-11-26 18:25:31.008676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:56.720 [2024-11-26 18:25:31.008696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.654 ms 00:23:56.720 [2024-11-26 18:25:31.008711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.720 [2024-11-26 18:25:31.008904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.720 [2024-11-26 18:25:31.008932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:56.720 [2024-11-26 18:25:31.008963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:56.720 [2024-11-26 18:25:31.008996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.720 [2024-11-26 18:25:31.009075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.720 [2024-11-26 18:25:31.009102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:56.720 [2024-11-26 18:25:31.009117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:23:56.720 [2024-11-26 18:25:31.009143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.720 [2024-11-26 18:25:31.009180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.720 [2024-11-26 18:25:31.009206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:56.720 [2024-11-26 18:25:31.009220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:56.720 [2024-11-26 18:25:31.009238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.720 [2024-11-26 18:25:31.009293] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:56.720 [2024-11-26 18:25:31.009323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.720 [2024-11-26 18:25:31.009345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:56.720 [2024-11-26 18:25:31.009363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:56.720 [2024-11-26 18:25:31.009382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.720 [2024-11-26 18:25:31.041591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.720 [2024-11-26 18:25:31.041651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:56.720 [2024-11-26 18:25:31.041692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.152 ms 00:23:56.720 [2024-11-26 18:25:31.041705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.720 [2024-11-26 18:25:31.041844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.720 [2024-11-26 18:25:31.041881] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:56.720 [2024-11-26 18:25:31.041924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:56.720 [2024-11-26 18:25:31.041937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.720 [2024-11-26 18:25:31.043437] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:56.720 [2024-11-26 18:25:31.047590] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 367.619 ms, result 0 00:23:56.720 [2024-11-26 18:25:31.048836] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:56.720 Some configs were skipped because the RPC state that can call them passed over. 00:23:56.720 18:25:31 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:23:56.978 [2024-11-26 18:25:31.358508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.978 [2024-11-26 18:25:31.358673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:56.978 [2024-11-26 18:25:31.358698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.827 ms 00:23:56.978 [2024-11-26 18:25:31.358719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.978 [2024-11-26 18:25:31.358778] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.110 ms, result 0 00:23:56.978 true 00:23:56.978 18:25:31 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:23:57.237 [2024-11-26 18:25:31.614238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.237 [2024-11-26 18:25:31.614324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:57.237 [2024-11-26 18:25:31.614367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.287 ms 00:23:57.237 [2024-11-26 18:25:31.614379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.237 [2024-11-26 18:25:31.614468] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.511 ms, result 0 00:23:57.237 true 00:23:57.237 18:25:31 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78710 00:23:57.237 18:25:31 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78710 ']' 00:23:57.237 18:25:31 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78710 00:23:57.237 18:25:31 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:23:57.237 18:25:31 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:57.237 18:25:31 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78710 00:23:57.237 killing process with pid 78710 00:23:57.237 18:25:31 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:57.237 18:25:31 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:57.237 18:25:31 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78710' 00:23:57.237 18:25:31 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78710 00:23:57.237 18:25:31 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78710 00:23:58.171 [2024-11-26 18:25:32.606148] 
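
The two bdev_ftl_unmap calls above trim a 1024-block range at each end of the device: the second --lba, 23591936, is exactly the L2P entry count reported in the startup layout dump (23592960) minus 1024, so the test touches both the first and the last 1024-block range of the address space. A minimal sketch of the pair, assuming the repo path shown in the log:

  # Trim the first and last 1024 logical blocks of ftl0 (illustrative).
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  total_blocks=23592960   # "L2P entries" from the startup layout dump
  chunk=1024

  "$RPC" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks "$chunk"
  "$RPC" bdev_ftl_unmap -b ftl0 --lba "$(( total_blocks - chunk ))" --num_blocks "$chunk"
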
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.171 [2024-11-26 18:25:32.606270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:58.171 [2024-11-26 18:25:32.606291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:58.171 [2024-11-26 18:25:32.606304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.171 [2024-11-26 18:25:32.606338] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:58.171 [2024-11-26 18:25:32.609744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.171 [2024-11-26 18:25:32.609793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:58.171 [2024-11-26 18:25:32.609828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.380 ms 00:23:58.171 [2024-11-26 18:25:32.609839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.171 [2024-11-26 18:25:32.610171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.171 [2024-11-26 18:25:32.610199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:58.171 [2024-11-26 18:25:32.610214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:23:58.171 [2024-11-26 18:25:32.610225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.171 [2024-11-26 18:25:32.613928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.171 [2024-11-26 18:25:32.613989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:58.171 [2024-11-26 18:25:32.614007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.673 ms 00:23:58.171 [2024-11-26 18:25:32.614019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.171 [2024-11-26 18:25:32.620643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.171 [2024-11-26 18:25:32.620696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:58.171 [2024-11-26 18:25:32.620713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.561 ms 00:23:58.171 [2024-11-26 18:25:32.620723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.432 [2024-11-26 18:25:32.631691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.432 [2024-11-26 18:25:32.631757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:58.432 [2024-11-26 18:25:32.631793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.906 ms 00:23:58.432 [2024-11-26 18:25:32.631816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.432 [2024-11-26 18:25:32.640253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.432 [2024-11-26 18:25:32.640314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:58.432 [2024-11-26 18:25:32.640347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.389 ms 00:23:58.432 [2024-11-26 18:25:32.640358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.432 [2024-11-26 18:25:32.640503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.432 [2024-11-26 18:25:32.640523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:58.432 [2024-11-26 18:25:32.640537] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:23:58.432 [2024-11-26 18:25:32.640563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.432 [2024-11-26 18:25:32.652261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.432 [2024-11-26 18:25:32.652318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:58.432 [2024-11-26 18:25:32.652357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.629 ms 00:23:58.432 [2024-11-26 18:25:32.652369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.432 [2024-11-26 18:25:32.663551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.432 [2024-11-26 18:25:32.663613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:58.432 [2024-11-26 18:25:32.663655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.129 ms 00:23:58.432 [2024-11-26 18:25:32.663667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.432 [2024-11-26 18:25:32.674550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.432 [2024-11-26 18:25:32.674616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:58.432 [2024-11-26 18:25:32.674654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.829 ms 00:23:58.432 [2024-11-26 18:25:32.674666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.432 [2024-11-26 18:25:32.685637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.432 [2024-11-26 18:25:32.685694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:58.432 [2024-11-26 18:25:32.685731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.887 ms 00:23:58.432 [2024-11-26 18:25:32.685742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.432 [2024-11-26 18:25:32.685793] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:58.432 [2024-11-26 18:25:32.685816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.685843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.685856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.685872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.685883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.685903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.685930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.685962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.685974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.685990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 
18:25:32.686002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:23:58.432 [2024-11-26 18:25:32.686330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:58.432 [2024-11-26 18:25:32.686631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.686999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.687012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.687026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.687037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.687051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.687062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.687075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.687087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.687101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.687112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.687127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.687139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.687152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.687164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.687178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:58.433 [2024-11-26 18:25:32.687207] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:58.433 [2024-11-26 18:25:32.687225] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e417d21-16e8-4060-8f5a-5ce9752d454b 00:23:58.433 [2024-11-26 18:25:32.687241] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:58.433 [2024-11-26 18:25:32.687254] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:58.433 [2024-11-26 18:25:32.687266] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:58.433 [2024-11-26 18:25:32.687279] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:58.433 [2024-11-26 18:25:32.687289] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:58.433 [2024-11-26 18:25:32.687303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:58.433 [2024-11-26 18:25:32.687314] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:58.433 [2024-11-26 18:25:32.687326] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:58.433 [2024-11-26 18:25:32.687337] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:58.433 [2024-11-26 18:25:32.687350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:58.433 [2024-11-26 18:25:32.687361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:58.433 [2024-11-26 18:25:32.687376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.566 ms 00:23:58.433 [2024-11-26 18:25:32.687389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.433 [2024-11-26 18:25:32.702821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.433 [2024-11-26 18:25:32.702909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:58.433 [2024-11-26 18:25:32.702946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.385 ms 00:23:58.433 [2024-11-26 18:25:32.702957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.433 [2024-11-26 18:25:32.703482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.433 [2024-11-26 18:25:32.703579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:58.433 [2024-11-26 18:25:32.703603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:23:58.433 [2024-11-26 18:25:32.703615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.433 [2024-11-26 18:25:32.756686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.433 [2024-11-26 18:25:32.756752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:58.433 [2024-11-26 18:25:32.756791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.433 [2024-11-26 18:25:32.756804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.433 [2024-11-26 18:25:32.756918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.433 [2024-11-26 18:25:32.756938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:58.433 [2024-11-26 18:25:32.756964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.433 [2024-11-26 18:25:32.756976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.433 [2024-11-26 18:25:32.757062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.433 [2024-11-26 18:25:32.757081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:58.433 [2024-11-26 18:25:32.757111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.433 [2024-11-26 18:25:32.757123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.433 [2024-11-26 18:25:32.757156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.433 [2024-11-26 18:25:32.757171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:58.433 [2024-11-26 18:25:32.757188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.433 [2024-11-26 18:25:32.757205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.433 [2024-11-26 18:25:32.850211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.433 [2024-11-26 18:25:32.850301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:58.433 [2024-11-26 18:25:32.850339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.433 [2024-11-26 18:25:32.850351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.692 [2024-11-26 
18:25:32.926279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.692 [2024-11-26 18:25:32.926341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:58.692 [2024-11-26 18:25:32.926401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.692 [2024-11-26 18:25:32.926415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.692 [2024-11-26 18:25:32.926537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.692 [2024-11-26 18:25:32.926599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:58.692 [2024-11-26 18:25:32.926627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.692 [2024-11-26 18:25:32.926641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.692 [2024-11-26 18:25:32.926690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.692 [2024-11-26 18:25:32.926707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:58.692 [2024-11-26 18:25:32.926726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.692 [2024-11-26 18:25:32.926738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.692 [2024-11-26 18:25:32.926908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.692 [2024-11-26 18:25:32.926934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:58.692 [2024-11-26 18:25:32.926955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.692 [2024-11-26 18:25:32.926969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.692 [2024-11-26 18:25:32.927038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.692 [2024-11-26 18:25:32.927058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:58.692 [2024-11-26 18:25:32.927077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.692 [2024-11-26 18:25:32.927089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.692 [2024-11-26 18:25:32.927156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.692 [2024-11-26 18:25:32.927173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:58.692 [2024-11-26 18:25:32.927196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.692 [2024-11-26 18:25:32.927209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.692 [2024-11-26 18:25:32.927276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.692 [2024-11-26 18:25:32.927310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:58.692 [2024-11-26 18:25:32.927330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.692 [2024-11-26 18:25:32.927343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.692 [2024-11-26 18:25:32.927546] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 321.361 ms, result 0 00:23:59.626 18:25:33 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:59.626 18:25:33 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:59.626 [2024-11-26 18:25:33.928356] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:23:59.626 [2024-11-26 18:25:33.928622] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78774 ] 00:23:59.883 [2024-11-26 18:25:34.111102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.883 [2024-11-26 18:25:34.234292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.141 [2024-11-26 18:25:34.576381] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:00.141 [2024-11-26 18:25:34.576496] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:00.399 [2024-11-26 18:25:34.739107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.399 [2024-11-26 18:25:34.739215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:00.399 [2024-11-26 18:25:34.739253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:00.399 [2024-11-26 18:25:34.739265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.399 [2024-11-26 18:25:34.742826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.399 [2024-11-26 18:25:34.742890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:00.400 [2024-11-26 18:25:34.742923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.518 ms 00:24:00.400 [2024-11-26 18:25:34.742934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.400 [2024-11-26 18:25:34.743105] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:00.400 [2024-11-26 18:25:34.744075] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:00.400 [2024-11-26 18:25:34.744128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.400 [2024-11-26 18:25:34.744142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:00.400 [2024-11-26 18:25:34.744155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.034 ms 00:24:00.400 [2024-11-26 18:25:34.744166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.400 [2024-11-26 18:25:34.746350] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:00.400 [2024-11-26 18:25:34.761608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.400 [2024-11-26 18:25:34.761699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:00.400 [2024-11-26 18:25:34.761737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.258 ms 00:24:00.400 [2024-11-26 18:25:34.761749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.400 [2024-11-26 18:25:34.761908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.400 [2024-11-26 18:25:34.761930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:00.400 [2024-11-26 18:25:34.761944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.034 ms 00:24:00.400 [2024-11-26 18:25:34.761972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.400 [2024-11-26 18:25:34.771195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.400 [2024-11-26 18:25:34.771276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:00.400 [2024-11-26 18:25:34.771310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.125 ms 00:24:00.400 [2024-11-26 18:25:34.771322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.400 [2024-11-26 18:25:34.771503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.400 [2024-11-26 18:25:34.771525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:00.400 [2024-11-26 18:25:34.771539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:24:00.400 [2024-11-26 18:25:34.771550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.400 [2024-11-26 18:25:34.771661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.400 [2024-11-26 18:25:34.771681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:00.400 [2024-11-26 18:25:34.771694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:00.400 [2024-11-26 18:25:34.771706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.400 [2024-11-26 18:25:34.771745] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:00.400 [2024-11-26 18:25:34.776655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.400 [2024-11-26 18:25:34.776710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:00.400 [2024-11-26 18:25:34.776741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.918 ms 00:24:00.400 [2024-11-26 18:25:34.776753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.400 [2024-11-26 18:25:34.776847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.400 [2024-11-26 18:25:34.776867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:00.400 [2024-11-26 18:25:34.776883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:00.400 [2024-11-26 18:25:34.776894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.400 [2024-11-26 18:25:34.776933] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:00.400 [2024-11-26 18:25:34.776979] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:00.400 [2024-11-26 18:25:34.777037] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:00.400 [2024-11-26 18:25:34.777058] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:00.400 [2024-11-26 18:25:34.777163] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:00.400 [2024-11-26 18:25:34.777185] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:00.400 [2024-11-26 18:25:34.777200] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:00.400 [2024-11-26 18:25:34.777221] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:00.400 [2024-11-26 18:25:34.777235] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:00.400 [2024-11-26 18:25:34.777259] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:00.400 [2024-11-26 18:25:34.777270] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:00.400 [2024-11-26 18:25:34.777281] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:00.400 [2024-11-26 18:25:34.777291] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:00.400 [2024-11-26 18:25:34.777304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.400 [2024-11-26 18:25:34.777314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:00.400 [2024-11-26 18:25:34.777325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:24:00.400 [2024-11-26 18:25:34.777336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.400 [2024-11-26 18:25:34.777431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.400 [2024-11-26 18:25:34.777454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:00.400 [2024-11-26 18:25:34.777466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:00.400 [2024-11-26 18:25:34.777477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.400 [2024-11-26 18:25:34.777607] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:00.400 [2024-11-26 18:25:34.777628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:00.400 [2024-11-26 18:25:34.777649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:00.400 [2024-11-26 18:25:34.777660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:00.400 [2024-11-26 18:25:34.777671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:00.400 [2024-11-26 18:25:34.777685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:00.400 [2024-11-26 18:25:34.777698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:00.400 [2024-11-26 18:25:34.777709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:00.400 [2024-11-26 18:25:34.777719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:00.400 [2024-11-26 18:25:34.777729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:00.400 [2024-11-26 18:25:34.777740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:00.400 [2024-11-26 18:25:34.777765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:00.400 [2024-11-26 18:25:34.777775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:00.400 [2024-11-26 18:25:34.777786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:00.400 [2024-11-26 18:25:34.777797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:00.400 [2024-11-26 18:25:34.777807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:00.400 [2024-11-26 18:25:34.777816] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:00.400 [2024-11-26 18:25:34.777826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:00.400 [2024-11-26 18:25:34.777836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:00.400 [2024-11-26 18:25:34.777846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:00.400 [2024-11-26 18:25:34.777856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:00.400 [2024-11-26 18:25:34.777865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:00.400 [2024-11-26 18:25:34.777875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:00.400 [2024-11-26 18:25:34.777885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:00.400 [2024-11-26 18:25:34.777894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:00.400 [2024-11-26 18:25:34.777904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:00.400 [2024-11-26 18:25:34.777913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:00.400 [2024-11-26 18:25:34.777923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:00.400 [2024-11-26 18:25:34.777933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:00.400 [2024-11-26 18:25:34.777943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:00.400 [2024-11-26 18:25:34.777952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:00.400 [2024-11-26 18:25:34.777962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:00.400 [2024-11-26 18:25:34.777972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:00.400 [2024-11-26 18:25:34.777982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:00.400 [2024-11-26 18:25:34.777992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:00.400 [2024-11-26 18:25:34.778002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:00.400 [2024-11-26 18:25:34.778012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:00.400 [2024-11-26 18:25:34.778021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:00.400 [2024-11-26 18:25:34.778033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:00.400 [2024-11-26 18:25:34.778043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:00.400 [2024-11-26 18:25:34.778052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:00.400 [2024-11-26 18:25:34.778062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:00.400 [2024-11-26 18:25:34.778072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:00.400 [2024-11-26 18:25:34.778082] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:00.400 [2024-11-26 18:25:34.778094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:00.400 [2024-11-26 18:25:34.778109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:00.401 [2024-11-26 18:25:34.778120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:00.401 [2024-11-26 18:25:34.778133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:00.401 
[2024-11-26 18:25:34.778143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:00.401 [2024-11-26 18:25:34.778153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:00.401 [2024-11-26 18:25:34.778164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:00.401 [2024-11-26 18:25:34.778173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:00.401 [2024-11-26 18:25:34.778184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:00.401 [2024-11-26 18:25:34.778195] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:00.401 [2024-11-26 18:25:34.778209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:00.401 [2024-11-26 18:25:34.778221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:00.401 [2024-11-26 18:25:34.778232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:00.401 [2024-11-26 18:25:34.778243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:00.401 [2024-11-26 18:25:34.778254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:00.401 [2024-11-26 18:25:34.778265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:00.401 [2024-11-26 18:25:34.778275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:00.401 [2024-11-26 18:25:34.778286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:00.401 [2024-11-26 18:25:34.778296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:00.401 [2024-11-26 18:25:34.778307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:00.401 [2024-11-26 18:25:34.778317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:00.401 [2024-11-26 18:25:34.778328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:00.401 [2024-11-26 18:25:34.778338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:00.401 [2024-11-26 18:25:34.778350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:00.401 [2024-11-26 18:25:34.778361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:00.401 [2024-11-26 18:25:34.778372] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:00.401 [2024-11-26 18:25:34.778386] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:00.401 [2024-11-26 18:25:34.778399] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:00.401 [2024-11-26 18:25:34.778410] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:00.401 [2024-11-26 18:25:34.778421] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:00.401 [2024-11-26 18:25:34.778432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:00.401 [2024-11-26 18:25:34.778445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.401 [2024-11-26 18:25:34.778463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:00.401 [2024-11-26 18:25:34.778475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.918 ms 00:24:00.401 [2024-11-26 18:25:34.778486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.401 [2024-11-26 18:25:34.815960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.401 [2024-11-26 18:25:34.816062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:00.401 [2024-11-26 18:25:34.816100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.344 ms 00:24:00.401 [2024-11-26 18:25:34.816113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.401 [2024-11-26 18:25:34.816318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.401 [2024-11-26 18:25:34.816339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:00.401 [2024-11-26 18:25:34.816351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:00.401 [2024-11-26 18:25:34.816363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.659 [2024-11-26 18:25:34.867381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.659 [2024-11-26 18:25:34.867483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:00.659 [2024-11-26 18:25:34.867529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.969 ms 00:24:00.659 [2024-11-26 18:25:34.867541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.659 [2024-11-26 18:25:34.867733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.659 [2024-11-26 18:25:34.867754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:00.659 [2024-11-26 18:25:34.867768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:00.659 [2024-11-26 18:25:34.867779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.659 [2024-11-26 18:25:34.868406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.659 [2024-11-26 18:25:34.868449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:00.659 [2024-11-26 18:25:34.868473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:24:00.659 [2024-11-26 18:25:34.868485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.659 [2024-11-26 
18:25:34.868719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.659 [2024-11-26 18:25:34.868740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:00.659 [2024-11-26 18:25:34.868753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:24:00.659 [2024-11-26 18:25:34.868765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.659 [2024-11-26 18:25:34.887715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.659 [2024-11-26 18:25:34.887806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:00.659 [2024-11-26 18:25:34.887827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.916 ms 00:24:00.659 [2024-11-26 18:25:34.887839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.659 [2024-11-26 18:25:34.903059] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:24:00.659 [2024-11-26 18:25:34.903121] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:00.659 [2024-11-26 18:25:34.903155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.659 [2024-11-26 18:25:34.903167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:00.659 [2024-11-26 18:25:34.903180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.122 ms 00:24:00.659 [2024-11-26 18:25:34.903192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.659 [2024-11-26 18:25:34.928950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.659 [2024-11-26 18:25:34.929069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:00.659 [2024-11-26 18:25:34.929111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.661 ms 00:24:00.659 [2024-11-26 18:25:34.929123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.659 [2024-11-26 18:25:34.944476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.659 [2024-11-26 18:25:34.944544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:00.659 [2024-11-26 18:25:34.944586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.173 ms 00:24:00.659 [2024-11-26 18:25:34.944598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.659 [2024-11-26 18:25:34.958403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.659 [2024-11-26 18:25:34.958462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:00.659 [2024-11-26 18:25:34.958494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.693 ms 00:24:00.659 [2024-11-26 18:25:34.958530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.659 [2024-11-26 18:25:34.959525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.659 [2024-11-26 18:25:34.959635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:00.659 [2024-11-26 18:25:34.959652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.825 ms 00:24:00.659 [2024-11-26 18:25:34.959665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.659 [2024-11-26 18:25:35.032017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:00.659 [2024-11-26 18:25:35.032124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:00.659 [2024-11-26 18:25:35.032161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.316 ms 00:24:00.659 [2024-11-26 18:25:35.032173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.660 [2024-11-26 18:25:35.043123] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:00.660 [2024-11-26 18:25:35.063358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.660 [2024-11-26 18:25:35.063447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:00.660 [2024-11-26 18:25:35.063483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.029 ms 00:24:00.660 [2024-11-26 18:25:35.063503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.660 [2024-11-26 18:25:35.063682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.660 [2024-11-26 18:25:35.063704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:00.660 [2024-11-26 18:25:35.063718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:00.660 [2024-11-26 18:25:35.063744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.660 [2024-11-26 18:25:35.063857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.660 [2024-11-26 18:25:35.063875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:00.660 [2024-11-26 18:25:35.063888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:24:00.660 [2024-11-26 18:25:35.063907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.660 [2024-11-26 18:25:35.063990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.660 [2024-11-26 18:25:35.064009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:00.660 [2024-11-26 18:25:35.064022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:24:00.660 [2024-11-26 18:25:35.064034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.660 [2024-11-26 18:25:35.064085] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:00.660 [2024-11-26 18:25:35.064104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.660 [2024-11-26 18:25:35.064116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:00.660 [2024-11-26 18:25:35.064129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:00.660 [2024-11-26 18:25:35.064140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.660 [2024-11-26 18:25:35.094100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.660 [2024-11-26 18:25:35.094202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:00.660 [2024-11-26 18:25:35.094242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.930 ms 00:24:00.660 [2024-11-26 18:25:35.094254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.660 [2024-11-26 18:25:35.094424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.660 [2024-11-26 18:25:35.094445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:24:00.660 [2024-11-26 18:25:35.094459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:00.660 [2024-11-26 18:25:35.094470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.660 [2024-11-26 18:25:35.096041] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:00.660 [2024-11-26 18:25:35.099959] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 356.508 ms, result 0 00:24:00.660 [2024-11-26 18:25:35.100954] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:00.660 [2024-11-26 18:25:35.116311] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:02.031  [2024-11-26T18:25:37.426Z] Copying: 23/256 [MB] (23 MBps) [2024-11-26T18:25:38.382Z] Copying: 44/256 [MB] (21 MBps) [2024-11-26T18:25:39.322Z] Copying: 65/256 [MB] (20 MBps) [2024-11-26T18:25:40.262Z] Copying: 86/256 [MB] (20 MBps) [2024-11-26T18:25:41.198Z] Copying: 107/256 [MB] (21 MBps) [2024-11-26T18:25:42.133Z] Copying: 128/256 [MB] (20 MBps) [2024-11-26T18:25:43.509Z] Copying: 148/256 [MB] (20 MBps) [2024-11-26T18:25:44.444Z] Copying: 168/256 [MB] (20 MBps) [2024-11-26T18:25:45.380Z] Copying: 190/256 [MB] (21 MBps) [2024-11-26T18:25:46.316Z] Copying: 211/256 [MB] (21 MBps) [2024-11-26T18:25:47.300Z] Copying: 232/256 [MB] (20 MBps) [2024-11-26T18:25:47.300Z] Copying: 253/256 [MB] (20 MBps) [2024-11-26T18:25:47.300Z] Copying: 256/256 [MB] (average 21 MBps)[2024-11-26 18:25:47.247820] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:12.839 [2024-11-26 18:25:47.259121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.839 [2024-11-26 18:25:47.259192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:12.839 [2024-11-26 18:25:47.259243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:12.839 [2024-11-26 18:25:47.259255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.839 [2024-11-26 18:25:47.259286] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:12.839 [2024-11-26 18:25:47.262549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.839 [2024-11-26 18:25:47.262611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:12.839 [2024-11-26 18:25:47.262626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.242 ms 00:24:12.839 [2024-11-26 18:25:47.262637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.839 [2024-11-26 18:25:47.262979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.839 [2024-11-26 18:25:47.263006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:12.839 [2024-11-26 18:25:47.263020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:24:12.839 [2024-11-26 18:25:47.263031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.839 [2024-11-26 18:25:47.266264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.839 [2024-11-26 18:25:47.266310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:12.839 [2024-11-26 18:25:47.266323] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.189 ms 00:24:12.839 [2024-11-26 18:25:47.266334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.839 [2024-11-26 18:25:47.272728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.839 [2024-11-26 18:25:47.272778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:12.839 [2024-11-26 18:25:47.272805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.369 ms 00:24:12.839 [2024-11-26 18:25:47.272816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.111 [2024-11-26 18:25:47.299762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.111 [2024-11-26 18:25:47.299845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:13.111 [2024-11-26 18:25:47.299882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.860 ms 00:24:13.111 [2024-11-26 18:25:47.299893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.111 [2024-11-26 18:25:47.315703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.111 [2024-11-26 18:25:47.315800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:13.111 [2024-11-26 18:25:47.315848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.747 ms 00:24:13.112 [2024-11-26 18:25:47.315860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.112 [2024-11-26 18:25:47.316072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.112 [2024-11-26 18:25:47.316093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:13.112 [2024-11-26 18:25:47.316123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:24:13.112 [2024-11-26 18:25:47.316135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.112 [2024-11-26 18:25:47.342637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.112 [2024-11-26 18:25:47.342747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:13.112 [2024-11-26 18:25:47.342784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.474 ms 00:24:13.112 [2024-11-26 18:25:47.342797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.112 [2024-11-26 18:25:47.368982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.112 [2024-11-26 18:25:47.369086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:13.112 [2024-11-26 18:25:47.369123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.100 ms 00:24:13.112 [2024-11-26 18:25:47.369135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.112 [2024-11-26 18:25:47.394897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.112 [2024-11-26 18:25:47.394995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:13.112 [2024-11-26 18:25:47.395031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.673 ms 00:24:13.112 [2024-11-26 18:25:47.395044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.112 [2024-11-26 18:25:47.423819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.112 [2024-11-26 18:25:47.423916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL clean state 00:24:13.112 [2024-11-26 18:25:47.423969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.633 ms 00:24:13.112 [2024-11-26 18:25:47.423997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.112 [2024-11-26 18:25:47.424084] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:13.112 [2024-11-26 18:25:47.424122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:13.112 [2024-11-26 18:25:47.424139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:13.112 [2024-11-26 18:25:47.424153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:13.112 [2024-11-26 18:25:47.424166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:13.112 [2024-11-26 18:25:47.424179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:13.112 [2024-11-26 18:25:47.424191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:13.112 [2024-11-26 18:25:47.424203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:13.113 [2024-11-26 18:25:47.424216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:13.113 [2024-11-26 18:25:47.424228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:13.113 [2024-11-26 18:25:47.424240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:13.113 [2024-11-26 18:25:47.424253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:13.113 [2024-11-26 18:25:47.424265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:13.113 [2024-11-26 18:25:47.424279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:13.113 [2024-11-26 18:25:47.424291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:13.113 [2024-11-26 18:25:47.424304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:13.113 [2024-11-26 18:25:47.424317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:13.113 [2024-11-26 18:25:47.424329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:13.113 [2024-11-26 18:25:47.424357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:13.113 [2024-11-26 18:25:47.424369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:13.113 [2024-11-26 18:25:47.424382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:13.113 [2024-11-26 18:25:47.424393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:13.113 [2024-11-26 18:25:47.424405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:13.113 [2024-11-26 
18:25:47.424416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:13.114 [2024-11-26 18:25:47.424428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:13.114 [2024-11-26 18:25:47.424439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:13.114 [2024-11-26 18:25:47.424468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:13.114 [2024-11-26 18:25:47.424481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:13.114 [2024-11-26 18:25:47.424493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:13.114 [2024-11-26 18:25:47.424506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:13.114 [2024-11-26 18:25:47.424520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:13.114 [2024-11-26 18:25:47.424533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:13.114 [2024-11-26 18:25:47.424546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:13.114 [2024-11-26 18:25:47.424558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:13.114 [2024-11-26 18:25:47.424570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:13.114 [2024-11-26 18:25:47.424595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:13.114 [2024-11-26 18:25:47.424611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:13.114 [2024-11-26 18:25:47.424626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:13.114 [2024-11-26 18:25:47.424638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:13.114 [2024-11-26 18:25:47.424650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:13.115 [2024-11-26 18:25:47.424662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:13.115 [2024-11-26 18:25:47.424674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:13.115 [2024-11-26 18:25:47.424687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:13.115 [2024-11-26 18:25:47.424699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:13.115 [2024-11-26 18:25:47.424726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:13.115 [2024-11-26 18:25:47.424738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:13.115 [2024-11-26 18:25:47.424750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:13.115 [2024-11-26 18:25:47.424774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:24:13.115 [2024-11-26 18:25:47.424785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:13.115 [2024-11-26 18:25:47.424796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:13.115 [2024-11-26 18:25:47.424808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:13.119 [2024-11-26 18:25:47.424819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:13.119 [2024-11-26 18:25:47.424830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:13.119 [2024-11-26 18:25:47.424842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:13.119 [2024-11-26 18:25:47.424853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:13.119 [2024-11-26 18:25:47.424865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:13.119 [2024-11-26 18:25:47.424876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:13.119 [2024-11-26 18:25:47.424888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:13.119 [2024-11-26 18:25:47.424899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:13.119 [2024-11-26 18:25:47.424910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:13.119 [2024-11-26 18:25:47.424922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:13.119 [2024-11-26 18:25:47.424934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:13.119 [2024-11-26 18:25:47.424948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:13.119 [2024-11-26 18:25:47.424977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:13.119 [2024-11-26 18:25:47.424989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:13.119 [2024-11-26 18:25:47.425002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:13.119 [2024-11-26 18:25:47.425015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:13.120 [2024-11-26 18:25:47.425027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:13.120 [2024-11-26 18:25:47.425039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:13.120 [2024-11-26 18:25:47.425051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:13.120 [2024-11-26 18:25:47.425063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:13.120 [2024-11-26 18:25:47.425075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:13.120 [2024-11-26 18:25:47.425087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:24:13.120 [2024-11-26 18:25:47.425100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:13.120 [2024-11-26 18:25:47.425113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:13.120 [2024-11-26 18:25:47.425125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:13.120 [2024-11-26 18:25:47.425138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:13.120 [2024-11-26 18:25:47.425150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:13.120 [2024-11-26 18:25:47.425162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:13.120 [2024-11-26 18:25:47.425175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:13.120 [2024-11-26 18:25:47.425187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:13.120 [2024-11-26 18:25:47.425199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:13.120 [2024-11-26 18:25:47.425217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:13.120 [2024-11-26 18:25:47.425229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:13.120 [2024-11-26 18:25:47.425241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:13.120 [2024-11-26 18:25:47.425253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:13.120 [2024-11-26 18:25:47.425265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:13.120 [2024-11-26 18:25:47.425277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:13.121 [2024-11-26 18:25:47.425289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:13.121 [2024-11-26 18:25:47.425302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:13.121 [2024-11-26 18:25:47.425314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:13.121 [2024-11-26 18:25:47.425326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:13.121 [2024-11-26 18:25:47.425352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:13.121 [2024-11-26 18:25:47.425364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:13.121 [2024-11-26 18:25:47.425410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:13.121 [2024-11-26 18:25:47.425429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:13.121 [2024-11-26 18:25:47.425443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:13.121 [2024-11-26 18:25:47.425455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:13.121 [2024-11-26 18:25:47.425467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:13.121 [2024-11-26 18:25:47.425479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:13.121 [2024-11-26 18:25:47.425491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:13.121 [2024-11-26 18:25:47.425513] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:13.121 [2024-11-26 18:25:47.425549] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e417d21-16e8-4060-8f5a-5ce9752d454b 00:24:13.121 [2024-11-26 18:25:47.425562] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:13.121 [2024-11-26 18:25:47.425572] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:13.121 [2024-11-26 18:25:47.425612] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:13.121 [2024-11-26 18:25:47.425625] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:13.121 [2024-11-26 18:25:47.425636] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:13.121 [2024-11-26 18:25:47.425647] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:13.122 [2024-11-26 18:25:47.425665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:13.122 [2024-11-26 18:25:47.425675] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:13.122 [2024-11-26 18:25:47.425685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:13.122 [2024-11-26 18:25:47.425696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.122 [2024-11-26 18:25:47.425708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:13.122 [2024-11-26 18:25:47.425721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.615 ms 00:24:13.122 [2024-11-26 18:25:47.425749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.122 [2024-11-26 18:25:47.443123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.122 [2024-11-26 18:25:47.443509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:13.122 [2024-11-26 18:25:47.443541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.335 ms 00:24:13.122 [2024-11-26 18:25:47.443594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.122 [2024-11-26 18:25:47.444229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.122 [2024-11-26 18:25:47.444253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:13.122 [2024-11-26 18:25:47.444268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:24:13.122 [2024-11-26 18:25:47.444279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.122 [2024-11-26 18:25:47.485440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.122 [2024-11-26 18:25:47.485549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:13.122 [2024-11-26 18:25:47.485588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.122 [2024-11-26 18:25:47.485632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.122 
[2024-11-26 18:25:47.485786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.122 [2024-11-26 18:25:47.485805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:13.123 [2024-11-26 18:25:47.485818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.123 [2024-11-26 18:25:47.485830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.123 [2024-11-26 18:25:47.485910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.123 [2024-11-26 18:25:47.485930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:13.123 [2024-11-26 18:25:47.485942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.123 [2024-11-26 18:25:47.485953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.123 [2024-11-26 18:25:47.485994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.123 [2024-11-26 18:25:47.486007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:13.123 [2024-11-26 18:25:47.486018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.123 [2024-11-26 18:25:47.486029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.386 [2024-11-26 18:25:47.589736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.386 [2024-11-26 18:25:47.589835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:13.386 [2024-11-26 18:25:47.589871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.386 [2024-11-26 18:25:47.589882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.386 [2024-11-26 18:25:47.663770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.386 [2024-11-26 18:25:47.663864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:13.386 [2024-11-26 18:25:47.663901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.386 [2024-11-26 18:25:47.663913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.386 [2024-11-26 18:25:47.664000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.386 [2024-11-26 18:25:47.664016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:13.386 [2024-11-26 18:25:47.664028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.386 [2024-11-26 18:25:47.664038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.386 [2024-11-26 18:25:47.664072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.386 [2024-11-26 18:25:47.664117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:13.386 [2024-11-26 18:25:47.664129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.386 [2024-11-26 18:25:47.664139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.386 [2024-11-26 18:25:47.664261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.386 [2024-11-26 18:25:47.664280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:13.386 [2024-11-26 18:25:47.664291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.386 [2024-11-26 18:25:47.664301] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.386 [2024-11-26 18:25:47.664351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.386 [2024-11-26 18:25:47.664368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:13.386 [2024-11-26 18:25:47.664394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.386 [2024-11-26 18:25:47.664405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.386 [2024-11-26 18:25:47.664453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.386 [2024-11-26 18:25:47.664467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:13.386 [2024-11-26 18:25:47.664477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.387 [2024-11-26 18:25:47.664488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.387 [2024-11-26 18:25:47.664538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:13.387 [2024-11-26 18:25:47.664629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:13.387 [2024-11-26 18:25:47.664642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:13.387 [2024-11-26 18:25:47.664653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.387 [2024-11-26 18:25:47.664875] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 405.737 ms, result 0 00:24:14.319 00:24:14.319 00:24:14.319 18:25:48 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:24:14.319 18:25:48 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:24:14.886 18:25:49 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:14.886 [2024-11-26 18:25:49.271256] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
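Each FTL management step in this log is traced as a short group of NOTICE lines from mngt/ftl_mngt.c (Action/Rollback, name, duration, status), and finish_msg then reports the whole process, e.g. 'FTL shutdown', duration = 405.737 ms just above. A minimal sketch for summarizing those steps from a console log; the regexes and names are mine and assume one NOTICE entry per line, as SPDK emits them:

```python
import re
from collections import defaultdict

# Illustrative summarizer for the trace_step output in this log. The
# regexes assume the exact "*NOTICE*: [FTL][ftl0] ..." format printed by
# mngt/ftl_mngt.c, one entry per line; none of this is SPDK code.
NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
DUR_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

def step_durations(lines):
    """Sum per-step durations; a name line is followed by its duration line."""
    totals = defaultdict(float)
    name = None
    for line in lines:
        if (m := NAME_RE.search(line)):
            name = m.group(1).strip()
        elif (m := DUR_RE.search(line)) and name is not None:
            totals[name] += float(m.group(1))
            name = None
    return totals

# e.g.: with open("console.log") as f: print(step_durations(f))
# Summing the values should land close to the totals that finish_msg
# reports ("duration = 405.737 ms" for the shutdown above).
```

For scale, the spdk_dd step above writes --count=1024 blocks of the random pattern to ftl0; at a 4 KiB I/O size that is 4 MiB, consistent with the later "Copying: 4096/4096 [kB]" progress line and the 4194304-byte cmp in trim.sh.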
00:24:14.886 [2024-11-26 18:25:49.271748] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78929 ] 00:24:15.144 [2024-11-26 18:25:49.454657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.144 [2024-11-26 18:25:49.581486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.711 [2024-11-26 18:25:49.921682] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:15.711 [2024-11-26 18:25:49.922124] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:15.711 [2024-11-26 18:25:50.087798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.711 [2024-11-26 18:25:50.087882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:15.711 [2024-11-26 18:25:50.087918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:15.711 [2024-11-26 18:25:50.087930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.711 [2024-11-26 18:25:50.091377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.711 [2024-11-26 18:25:50.091422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:15.711 [2024-11-26 18:25:50.091454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.418 ms 00:24:15.711 [2024-11-26 18:25:50.091464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.711 [2024-11-26 18:25:50.091665] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:15.711 [2024-11-26 18:25:50.092736] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:15.711 [2024-11-26 18:25:50.092778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.711 [2024-11-26 18:25:50.092808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:15.711 [2024-11-26 18:25:50.092821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.124 ms 00:24:15.711 [2024-11-26 18:25:50.092832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.711 [2024-11-26 18:25:50.095147] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:15.711 [2024-11-26 18:25:50.111162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.711 [2024-11-26 18:25:50.111205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:15.711 [2024-11-26 18:25:50.111238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.016 ms 00:24:15.711 [2024-11-26 18:25:50.111249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.711 [2024-11-26 18:25:50.111359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.711 [2024-11-26 18:25:50.111381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:15.711 [2024-11-26 18:25:50.111395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:24:15.711 [2024-11-26 18:25:50.111406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.711 [2024-11-26 18:25:50.120473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:15.711 [2024-11-26 18:25:50.120519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:15.711 [2024-11-26 18:25:50.120549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.014 ms 00:24:15.711 [2024-11-26 18:25:50.120561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.711 [2024-11-26 18:25:50.120749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.711 [2024-11-26 18:25:50.120772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:15.711 [2024-11-26 18:25:50.120786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:24:15.711 [2024-11-26 18:25:50.120798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.711 [2024-11-26 18:25:50.120843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.711 [2024-11-26 18:25:50.120859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:15.711 [2024-11-26 18:25:50.120880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:15.711 [2024-11-26 18:25:50.120891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.711 [2024-11-26 18:25:50.120921] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:15.711 [2024-11-26 18:25:50.125714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.711 [2024-11-26 18:25:50.125751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:15.711 [2024-11-26 18:25:50.125781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.801 ms 00:24:15.711 [2024-11-26 18:25:50.125793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.711 [2024-11-26 18:25:50.125874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.711 [2024-11-26 18:25:50.125894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:15.712 [2024-11-26 18:25:50.125906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:15.712 [2024-11-26 18:25:50.125917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.712 [2024-11-26 18:25:50.125969] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:15.712 [2024-11-26 18:25:50.126013] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:15.712 [2024-11-26 18:25:50.126054] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:15.712 [2024-11-26 18:25:50.126075] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:15.712 [2024-11-26 18:25:50.126176] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:15.712 [2024-11-26 18:25:50.126192] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:15.712 [2024-11-26 18:25:50.126206] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:15.712 [2024-11-26 18:25:50.126226] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:15.712 [2024-11-26 18:25:50.126239] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:15.712 [2024-11-26 18:25:50.126252] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:15.712 [2024-11-26 18:25:50.126263] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:15.712 [2024-11-26 18:25:50.126273] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:15.712 [2024-11-26 18:25:50.126284] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:15.712 [2024-11-26 18:25:50.126297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.712 [2024-11-26 18:25:50.126308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:15.712 [2024-11-26 18:25:50.126320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:24:15.712 [2024-11-26 18:25:50.126330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.712 [2024-11-26 18:25:50.126434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.712 [2024-11-26 18:25:50.126455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:15.712 [2024-11-26 18:25:50.126467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:24:15.712 [2024-11-26 18:25:50.126478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.712 [2024-11-26 18:25:50.126667] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:15.712 [2024-11-26 18:25:50.126707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:15.712 [2024-11-26 18:25:50.126721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:15.712 [2024-11-26 18:25:50.126733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:15.712 [2024-11-26 18:25:50.126746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:15.712 [2024-11-26 18:25:50.126757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:15.712 [2024-11-26 18:25:50.126769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:15.712 [2024-11-26 18:25:50.126781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:15.712 [2024-11-26 18:25:50.126795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:15.712 [2024-11-26 18:25:50.126805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:15.712 [2024-11-26 18:25:50.126817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:15.712 [2024-11-26 18:25:50.126857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:15.712 [2024-11-26 18:25:50.126883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:15.712 [2024-11-26 18:25:50.126893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:15.712 [2024-11-26 18:25:50.126904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:15.712 [2024-11-26 18:25:50.126917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:15.712 [2024-11-26 18:25:50.126928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:15.712 [2024-11-26 18:25:50.126939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:15.712 [2024-11-26 18:25:50.126950] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:15.712 [2024-11-26 18:25:50.126961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:15.712 [2024-11-26 18:25:50.126972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:15.712 [2024-11-26 18:25:50.126998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:15.712 [2024-11-26 18:25:50.127026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:15.712 [2024-11-26 18:25:50.127051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:15.712 [2024-11-26 18:25:50.127062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:15.712 [2024-11-26 18:25:50.127072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:15.712 [2024-11-26 18:25:50.127083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:15.712 [2024-11-26 18:25:50.127093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:15.712 [2024-11-26 18:25:50.127104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:15.712 [2024-11-26 18:25:50.127114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:15.712 [2024-11-26 18:25:50.127125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:15.712 [2024-11-26 18:25:50.127135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:15.712 [2024-11-26 18:25:50.127146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:15.712 [2024-11-26 18:25:50.127157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:15.712 [2024-11-26 18:25:50.127167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:15.712 [2024-11-26 18:25:50.127178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:15.712 [2024-11-26 18:25:50.127189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:15.712 [2024-11-26 18:25:50.127200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:15.712 [2024-11-26 18:25:50.127211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:15.712 [2024-11-26 18:25:50.127221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:15.712 [2024-11-26 18:25:50.127232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:15.712 [2024-11-26 18:25:50.127242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:15.712 [2024-11-26 18:25:50.127253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:15.712 [2024-11-26 18:25:50.127263] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:15.712 [2024-11-26 18:25:50.127275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:15.712 [2024-11-26 18:25:50.127292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:15.712 [2024-11-26 18:25:50.127304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:15.712 [2024-11-26 18:25:50.127318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:15.712 [2024-11-26 18:25:50.127330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:15.712 [2024-11-26 18:25:50.127341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:15.712 
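The sizing in this layout dump is internally consistent. From the setup lines above, the L2P table needs "L2P entries" × "L2P address size" = 23592960 × 4 bytes = 90.00 MiB, exactly the l2p region's size; and in the superblock v5 dump just below, a region of blk_sz 0x5a00 works out to the same 90 MiB once converted with a 4 KiB FTL block. A quick check (the 4096-byte block size is an assumption, but it is the only value that reconciles the two dumps):

```python
# Sanity-check the layout numbers in the dump around this point.
# FTL_BLOCK_SIZE = 4096 is an assumption consistent with the figures
# shown (0x5a00 blocks == 90.00 MiB only at a 4 KiB block).
FTL_BLOCK_SIZE = 4096
MiB = 1024 * 1024

l2p_entries = 23592960   # "L2P entries" from ftl_layout_setup above
l2p_addr_size = 4        # "L2P address size" (bytes per entry)

l2p_bytes = l2p_entries * l2p_addr_size
print(l2p_bytes / MiB)                # 90.0 -> matches "Region l2p ... 90.00 MiB"

blk_sz = 0x5a00          # a region size in blocks from the SB v5 dump below
print(blk_sz * FTL_BLOCK_SIZE / MiB)  # 90.0 -> same size, expressed in hex blocks
```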
[2024-11-26 18:25:50.127352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:15.712 [2024-11-26 18:25:50.127362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:15.712 [2024-11-26 18:25:50.127373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:15.712 [2024-11-26 18:25:50.127386] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:15.712 [2024-11-26 18:25:50.127400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:15.712 [2024-11-26 18:25:50.127413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:15.712 [2024-11-26 18:25:50.127425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:15.712 [2024-11-26 18:25:50.127436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:15.712 [2024-11-26 18:25:50.127448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:15.712 [2024-11-26 18:25:50.127459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:15.712 [2024-11-26 18:25:50.127471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:15.712 [2024-11-26 18:25:50.127482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:15.712 [2024-11-26 18:25:50.127494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:15.712 [2024-11-26 18:25:50.127505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:15.712 [2024-11-26 18:25:50.127517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:15.712 [2024-11-26 18:25:50.127534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:15.712 [2024-11-26 18:25:50.127546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:15.712 [2024-11-26 18:25:50.127557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:15.712 [2024-11-26 18:25:50.127569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:15.712 [2024-11-26 18:25:50.127581] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:15.712 [2024-11-26 18:25:50.127594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:15.712 [2024-11-26 18:25:50.127606] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:15.712 [2024-11-26 18:25:50.127618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:15.713 [2024-11-26 18:25:50.127645] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:15.713 [2024-11-26 18:25:50.127658] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:15.713 [2024-11-26 18:25:50.127684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.713 [2024-11-26 18:25:50.127704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:15.713 [2024-11-26 18:25:50.127717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.107 ms 00:24:15.713 [2024-11-26 18:25:50.127728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.713 [2024-11-26 18:25:50.166802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.713 [2024-11-26 18:25:50.167219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:15.713 [2024-11-26 18:25:50.167253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.988 ms 00:24:15.713 [2024-11-26 18:25:50.167283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.713 [2024-11-26 18:25:50.167532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.713 [2024-11-26 18:25:50.167568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:15.713 [2024-11-26 18:25:50.167582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:24:15.713 [2024-11-26 18:25:50.167609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.971 [2024-11-26 18:25:50.221828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.971 [2024-11-26 18:25:50.221890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:15.971 [2024-11-26 18:25:50.221930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.125 ms 00:24:15.971 [2024-11-26 18:25:50.221942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.971 [2024-11-26 18:25:50.222107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.971 [2024-11-26 18:25:50.222129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:15.971 [2024-11-26 18:25:50.222142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:15.971 [2024-11-26 18:25:50.222154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.971 [2024-11-26 18:25:50.222859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.971 [2024-11-26 18:25:50.222912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:15.971 [2024-11-26 18:25:50.222937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:24:15.971 [2024-11-26 18:25:50.222949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.971 [2024-11-26 18:25:50.223155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.971 [2024-11-26 18:25:50.223182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:15.971 [2024-11-26 18:25:50.223196] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:24:15.971 [2024-11-26 18:25:50.223208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.971 [2024-11-26 18:25:50.241929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.971 [2024-11-26 18:25:50.242174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:15.971 [2024-11-26 18:25:50.242216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.689 ms 00:24:15.971 [2024-11-26 18:25:50.242230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.971 [2024-11-26 18:25:50.257963] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:24:15.971 [2024-11-26 18:25:50.258024] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:15.971 [2024-11-26 18:25:50.258059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.971 [2024-11-26 18:25:50.258072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:15.971 [2024-11-26 18:25:50.258085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.673 ms 00:24:15.971 [2024-11-26 18:25:50.258095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.971 [2024-11-26 18:25:50.284328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.971 [2024-11-26 18:25:50.284372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:15.971 [2024-11-26 18:25:50.284404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.135 ms 00:24:15.971 [2024-11-26 18:25:50.284415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.971 [2024-11-26 18:25:50.299043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.971 [2024-11-26 18:25:50.299244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:15.971 [2024-11-26 18:25:50.299270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.535 ms 00:24:15.971 [2024-11-26 18:25:50.299282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.971 [2024-11-26 18:25:50.313368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.971 [2024-11-26 18:25:50.313578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:15.971 [2024-11-26 18:25:50.313605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.993 ms 00:24:15.971 [2024-11-26 18:25:50.313617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.971 [2024-11-26 18:25:50.314454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.971 [2024-11-26 18:25:50.314483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:15.971 [2024-11-26 18:25:50.314498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:24:15.971 [2024-11-26 18:25:50.314536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.971 [2024-11-26 18:25:50.391184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.971 [2024-11-26 18:25:50.391652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:15.971 [2024-11-26 18:25:50.391685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 76.589 ms 00:24:15.971 [2024-11-26 18:25:50.391699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.971 [2024-11-26 18:25:50.403908] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:15.972 [2024-11-26 18:25:50.424517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.972 [2024-11-26 18:25:50.424638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:15.972 [2024-11-26 18:25:50.424662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.638 ms 00:24:15.972 [2024-11-26 18:25:50.424684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.972 [2024-11-26 18:25:50.424848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.972 [2024-11-26 18:25:50.424871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:15.972 [2024-11-26 18:25:50.424885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:15.972 [2024-11-26 18:25:50.424897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.972 [2024-11-26 18:25:50.424974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.972 [2024-11-26 18:25:50.424994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:15.972 [2024-11-26 18:25:50.425007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:24:15.972 [2024-11-26 18:25:50.425024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.972 [2024-11-26 18:25:50.425120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.972 [2024-11-26 18:25:50.425138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:15.972 [2024-11-26 18:25:50.425152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:24:15.972 [2024-11-26 18:25:50.425164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:15.972 [2024-11-26 18:25:50.425215] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:15.972 [2024-11-26 18:25:50.425233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:15.972 [2024-11-26 18:25:50.425245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:15.972 [2024-11-26 18:25:50.425257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:24:15.972 [2024-11-26 18:25:50.425270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.230 [2024-11-26 18:25:50.454531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.230 [2024-11-26 18:25:50.454625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:16.230 [2024-11-26 18:25:50.454645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.232 ms 00:24:16.230 [2024-11-26 18:25:50.454658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.230 [2024-11-26 18:25:50.454839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.230 [2024-11-26 18:25:50.454864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:16.230 [2024-11-26 18:25:50.454893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:16.230 [2024-11-26 18:25:50.454921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
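The bands-validity dump that follows the clean-state transition below prints one NOTICE per band, a hundred near-identical lines per shutdown. When skimming logs like this, a tiny condenser helps; the regex assumes the exact ftl_dev_dump_bands line format shown, and the helper name is mine:

```python
import re
from collections import Counter

# Condense ftl_dev_dump_bands output ("Band N: valid / total wr_cnt: W
# state: S") into one count per distinct (valid, total, wr_cnt, state).
# The regex assumes the exact NOTICE format shown in this log.
BAND_RE = re.compile(r"Band \d+: (\d+) / (\d+) wr_cnt: (\d+) state: (\w+)")

def condense(text: str) -> Counter:
    """Count bands by (valid, total, wr_cnt, state)."""
    return Counter(BAND_RE.findall(text))

# In this run every band reads "0 / 261120 wr_cnt: 0 state: free", so
# the counter collapses 100 lines into a single entry:
#   Counter({('0', '261120', '0', 'free'): 100})
```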
00:24:16.230 [2024-11-26 18:25:50.456270] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:16.230 [2024-11-26 18:25:50.460263] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 368.090 ms, result 0 00:24:16.230 [2024-11-26 18:25:50.461198] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:16.230 [2024-11-26 18:25:50.476860] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:16.230  [2024-11-26T18:25:50.691Z] Copying: 4096/4096 [kB] (average 20 MBps)[2024-11-26 18:25:50.671639] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:16.230 [2024-11-26 18:25:50.683404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.230 [2024-11-26 18:25:50.683449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:16.230 [2024-11-26 18:25:50.683523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:16.230 [2024-11-26 18:25:50.683536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.230 [2024-11-26 18:25:50.683568] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:16.230 [2024-11-26 18:25:50.687178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.230 [2024-11-26 18:25:50.687212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:16.230 [2024-11-26 18:25:50.687242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.587 ms 00:24:16.230 [2024-11-26 18:25:50.687253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.489 [2024-11-26 18:25:50.689338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.489 [2024-11-26 18:25:50.689390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:16.489 [2024-11-26 18:25:50.689406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.057 ms 00:24:16.489 [2024-11-26 18:25:50.689417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.489 [2024-11-26 18:25:50.693128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.489 [2024-11-26 18:25:50.693168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:16.489 [2024-11-26 18:25:50.693201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.679 ms 00:24:16.489 [2024-11-26 18:25:50.693212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.490 [2024-11-26 18:25:50.699994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.490 [2024-11-26 18:25:50.700031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:16.490 [2024-11-26 18:25:50.700061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.726 ms 00:24:16.490 [2024-11-26 18:25:50.700072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.490 [2024-11-26 18:25:50.728598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.490 [2024-11-26 18:25:50.728638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:16.490 [2024-11-26 18:25:50.728671] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 28.447 ms 00:24:16.490 [2024-11-26 18:25:50.728682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.490 [2024-11-26 18:25:50.745393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.490 [2024-11-26 18:25:50.745443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:16.490 [2024-11-26 18:25:50.745476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.649 ms 00:24:16.490 [2024-11-26 18:25:50.745488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.490 [2024-11-26 18:25:50.745721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.490 [2024-11-26 18:25:50.745743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:16.490 [2024-11-26 18:25:50.745773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:24:16.490 [2024-11-26 18:25:50.745785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.490 [2024-11-26 18:25:50.774097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.490 [2024-11-26 18:25:50.774140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:16.490 [2024-11-26 18:25:50.774172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.287 ms 00:24:16.490 [2024-11-26 18:25:50.774183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.490 [2024-11-26 18:25:50.801889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.490 [2024-11-26 18:25:50.801931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:16.490 [2024-11-26 18:25:50.801978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.641 ms 00:24:16.490 [2024-11-26 18:25:50.801988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.490 [2024-11-26 18:25:50.829344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.490 [2024-11-26 18:25:50.829397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:16.490 [2024-11-26 18:25:50.829411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.294 ms 00:24:16.490 [2024-11-26 18:25:50.829421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.490 [2024-11-26 18:25:50.856745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.490 [2024-11-26 18:25:50.856797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:16.490 [2024-11-26 18:25:50.856812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.232 ms 00:24:16.490 [2024-11-26 18:25:50.856822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.490 [2024-11-26 18:25:50.856886] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:16.490 [2024-11-26 18:25:50.856911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.856926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.856937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.856948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:24:16.490 [2024-11-26 18:25:50.856959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.856969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.856980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.856990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:16.490 [2024-11-26 18:25:50.857711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.857724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.857736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.857747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.857759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.857771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.857783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.857795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.857807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.857818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.857830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.857842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.857854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.857865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.857878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.857890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.857902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.857914] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.857926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.857938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.857950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.857961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.857974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.857986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.858012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.858026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.858037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.858049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.858060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.858071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.858083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.858096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.858138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.858151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.858163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.858175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.858186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.858198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.858210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:16.491 [2024-11-26 18:25:50.858230] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:16.491 [2024-11-26 18:25:50.858242] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e417d21-16e8-4060-8f5a-5ce9752d454b 00:24:16.491 [2024-11-26 18:25:50.858253] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:16.491 [2024-11-26 18:25:50.858264] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:24:16.491 [2024-11-26 18:25:50.858275] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:16.491 [2024-11-26 18:25:50.858286] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:16.491 [2024-11-26 18:25:50.858297] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:16.491 [2024-11-26 18:25:50.858308] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:16.491 [2024-11-26 18:25:50.858324] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:16.491 [2024-11-26 18:25:50.858335] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:16.491 [2024-11-26 18:25:50.858344] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:16.491 [2024-11-26 18:25:50.858355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.491 [2024-11-26 18:25:50.858367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:16.491 [2024-11-26 18:25:50.858394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.471 ms 00:24:16.491 [2024-11-26 18:25:50.858404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.491 [2024-11-26 18:25:50.874475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.491 [2024-11-26 18:25:50.874516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:16.491 [2024-11-26 18:25:50.874549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.044 ms 00:24:16.491 [2024-11-26 18:25:50.874561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.491 [2024-11-26 18:25:50.875101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:16.491 [2024-11-26 18:25:50.875129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:16.491 [2024-11-26 18:25:50.875143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.467 ms 00:24:16.491 [2024-11-26 18:25:50.875154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.491 [2024-11-26 18:25:50.919050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.491 [2024-11-26 18:25:50.919121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:16.491 [2024-11-26 18:25:50.919136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.491 [2024-11-26 18:25:50.919153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.491 [2024-11-26 18:25:50.919236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.491 [2024-11-26 18:25:50.919253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:16.491 [2024-11-26 18:25:50.919265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.491 [2024-11-26 18:25:50.919276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.491 [2024-11-26 18:25:50.919335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.491 [2024-11-26 18:25:50.919353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:16.491 [2024-11-26 18:25:50.919379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.491 [2024-11-26 18:25:50.919390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.491 [2024-11-26 18:25:50.919421] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.491 [2024-11-26 18:25:50.919434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:16.491 [2024-11-26 18:25:50.919445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.491 [2024-11-26 18:25:50.919456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.749 [2024-11-26 18:25:51.013424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.749 [2024-11-26 18:25:51.013508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:16.749 [2024-11-26 18:25:51.013525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.749 [2024-11-26 18:25:51.013543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.749 [2024-11-26 18:25:51.089240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.749 [2024-11-26 18:25:51.089334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:16.749 [2024-11-26 18:25:51.089367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.749 [2024-11-26 18:25:51.089378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.749 [2024-11-26 18:25:51.089465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.749 [2024-11-26 18:25:51.089483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:16.749 [2024-11-26 18:25:51.089496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.749 [2024-11-26 18:25:51.089507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.749 [2024-11-26 18:25:51.089543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.749 [2024-11-26 18:25:51.089565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:16.749 [2024-11-26 18:25:51.089577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.749 [2024-11-26 18:25:51.089644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.749 [2024-11-26 18:25:51.089782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.749 [2024-11-26 18:25:51.089802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:16.749 [2024-11-26 18:25:51.089815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.749 [2024-11-26 18:25:51.089827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.749 [2024-11-26 18:25:51.089880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.749 [2024-11-26 18:25:51.089899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:16.749 [2024-11-26 18:25:51.089918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.749 [2024-11-26 18:25:51.089929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.749 [2024-11-26 18:25:51.090024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.750 [2024-11-26 18:25:51.090039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:16.750 [2024-11-26 18:25:51.090051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.750 [2024-11-26 18:25:51.090062] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:16.750 [2024-11-26 18:25:51.090115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:16.750 [2024-11-26 18:25:51.090148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:16.750 [2024-11-26 18:25:51.090160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:16.750 [2024-11-26 18:25:51.090171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:16.750 [2024-11-26 18:25:51.090342] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 406.924 ms, result 0 00:24:17.684 00:24:17.684 00:24:17.684 18:25:52 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78964 00:24:17.684 18:25:52 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:24:17.684 18:25:52 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78964 00:24:17.684 18:25:52 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78964 ']' 00:24:17.685 18:25:52 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.685 18:25:52 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:17.685 18:25:52 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.685 18:25:52 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:17.685 18:25:52 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:17.943 [2024-11-26 18:25:52.169996] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
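
Note: the trace just above shows ftl/trim.sh relaunching spdk_tgt with the ftl_init log flag, capturing its PID as svcpid, and blocking in waitforlisten until the target's RPC socket answers. A minimal sketch of that launch-and-wait pattern, with the binary, socket, and PID taken from this log; the polling loop is a simplification of the real waitforlisten helper in autotest_common.sh, which additionally checks that the PID stays alive:

    # Start the SPDK target with FTL init-time tracing, as trim.sh does above.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!   # 78964 in this run; killprocess uses it later

    # Block until the UNIX-domain RPC socket accepts commands; rpc.py
    # exits non-zero until the target is listening on /var/tmp/spdk.sock.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        "$rpc" -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done
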
00:24:17.943 [2024-11-26 18:25:52.170168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78964 ] 00:24:17.943 [2024-11-26 18:25:52.353532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.201 [2024-11-26 18:25:52.460893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.135 18:25:53 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:19.135 18:25:53 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:24:19.135 18:25:53 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:24:19.135 [2024-11-26 18:25:53.575442] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:19.135 [2024-11-26 18:25:53.575531] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:19.394 [2024-11-26 18:25:53.766869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.394 [2024-11-26 18:25:53.766947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:19.394 [2024-11-26 18:25:53.766987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:19.395 [2024-11-26 18:25:53.767001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.395 [2024-11-26 18:25:53.770899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.395 [2024-11-26 18:25:53.770953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:19.395 [2024-11-26 18:25:53.770982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.860 ms 00:24:19.395 [2024-11-26 18:25:53.770994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.395 [2024-11-26 18:25:53.771145] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:19.395 [2024-11-26 18:25:53.772056] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:19.395 [2024-11-26 18:25:53.772092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.395 [2024-11-26 18:25:53.772120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:19.395 [2024-11-26 18:25:53.772134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:24:19.395 [2024-11-26 18:25:53.772146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.395 [2024-11-26 18:25:53.774224] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:19.395 [2024-11-26 18:25:53.789681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.395 [2024-11-26 18:25:53.789747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:19.395 [2024-11-26 18:25:53.789765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.463 ms 00:24:19.395 [2024-11-26 18:25:53.789783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.395 [2024-11-26 18:25:53.789898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.395 [2024-11-26 18:25:53.789941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:19.395 [2024-11-26 18:25:53.789971] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:19.395 [2024-11-26 18:25:53.789987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.395 [2024-11-26 18:25:53.799093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.395 [2024-11-26 18:25:53.799162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:19.395 [2024-11-26 18:25:53.799178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.031 ms 00:24:19.395 [2024-11-26 18:25:53.799196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.395 [2024-11-26 18:25:53.799360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.395 [2024-11-26 18:25:53.799404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:19.395 [2024-11-26 18:25:53.799418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:24:19.395 [2024-11-26 18:25:53.799444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.395 [2024-11-26 18:25:53.799479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.395 [2024-11-26 18:25:53.799500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:19.395 [2024-11-26 18:25:53.799513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:19.395 [2024-11-26 18:25:53.799528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.395 [2024-11-26 18:25:53.799591] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:19.395 [2024-11-26 18:25:53.804350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.395 [2024-11-26 18:25:53.804398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:19.395 [2024-11-26 18:25:53.804418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.762 ms 00:24:19.395 [2024-11-26 18:25:53.804430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.395 [2024-11-26 18:25:53.804522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.395 [2024-11-26 18:25:53.804539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:19.395 [2024-11-26 18:25:53.804577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:19.395 [2024-11-26 18:25:53.804591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.395 [2024-11-26 18:25:53.804659] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:19.395 [2024-11-26 18:25:53.804689] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:19.395 [2024-11-26 18:25:53.804764] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:19.395 [2024-11-26 18:25:53.804789] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:19.395 [2024-11-26 18:25:53.804902] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:19.395 [2024-11-26 18:25:53.804919] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:19.395 [2024-11-26 18:25:53.804951] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:19.395 [2024-11-26 18:25:53.804967] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:19.395 [2024-11-26 18:25:53.804986] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:19.395 [2024-11-26 18:25:53.804999] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:19.395 [2024-11-26 18:25:53.805016] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:19.395 [2024-11-26 18:25:53.805028] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:19.395 [2024-11-26 18:25:53.805064] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:19.395 [2024-11-26 18:25:53.805107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.395 [2024-11-26 18:25:53.805124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:19.395 [2024-11-26 18:25:53.805136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:24:19.395 [2024-11-26 18:25:53.805159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.395 [2024-11-26 18:25:53.805252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.395 [2024-11-26 18:25:53.805274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:19.395 [2024-11-26 18:25:53.805287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:24:19.395 [2024-11-26 18:25:53.805303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.395 [2024-11-26 18:25:53.805410] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:19.395 [2024-11-26 18:25:53.805433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:19.395 [2024-11-26 18:25:53.805446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:19.395 [2024-11-26 18:25:53.805463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:19.395 [2024-11-26 18:25:53.805477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:19.395 [2024-11-26 18:25:53.805494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:19.395 [2024-11-26 18:25:53.805505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:19.395 [2024-11-26 18:25:53.805527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:19.395 [2024-11-26 18:25:53.805539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:19.395 [2024-11-26 18:25:53.805585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:19.395 [2024-11-26 18:25:53.805600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:19.395 [2024-11-26 18:25:53.805617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:19.395 [2024-11-26 18:25:53.805629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:19.395 [2024-11-26 18:25:53.805645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:19.395 [2024-11-26 18:25:53.805657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:19.395 [2024-11-26 18:25:53.805673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:19.395 
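
Note: the ftl_layout.c dump_region records above and below print each layout region as a Region/offset/blocks triplet, one *NOTICE* line per field in the raw console. A throwaway awk sketch that folds each triplet into one row per region; the field positions are assumed from the lines as printed here, and it expects the raw one-record-per-line console output saved as console.log (a hypothetical filename):

    awk '/dump_region.*Region /  { name = $NF }
         /dump_region.*offset:/  { off  = $(NF-1) }
         /dump_region.*blocks:/  { printf "%-16s offset %10s MiB  size %8s MiB\n", name, off, $(NF-1) }' console.log
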
[2024-11-26 18:25:53.805684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:19.395 [2024-11-26 18:25:53.805715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:19.395 [2024-11-26 18:25:53.805756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:19.395 [2024-11-26 18:25:53.805774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:19.395 [2024-11-26 18:25:53.805786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:19.395 [2024-11-26 18:25:53.805802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:19.395 [2024-11-26 18:25:53.805813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:19.395 [2024-11-26 18:25:53.805834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:19.395 [2024-11-26 18:25:53.805845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:19.395 [2024-11-26 18:25:53.805861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:19.395 [2024-11-26 18:25:53.805873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:19.395 [2024-11-26 18:25:53.805889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:19.395 [2024-11-26 18:25:53.805916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:19.395 [2024-11-26 18:25:53.805932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:19.395 [2024-11-26 18:25:53.805943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:19.395 [2024-11-26 18:25:53.805959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:19.395 [2024-11-26 18:25:53.805970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:19.395 [2024-11-26 18:25:53.805987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:19.395 [2024-11-26 18:25:53.806000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:19.395 [2024-11-26 18:25:53.806024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:19.395 [2024-11-26 18:25:53.806036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:19.395 [2024-11-26 18:25:53.806058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:19.395 [2024-11-26 18:25:53.806069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:19.395 [2024-11-26 18:25:53.806089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:19.395 [2024-11-26 18:25:53.806100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:19.395 [2024-11-26 18:25:53.806116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:19.395 [2024-11-26 18:25:53.806127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:19.395 [2024-11-26 18:25:53.806139] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:19.396 [2024-11-26 18:25:53.806153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:19.396 [2024-11-26 18:25:53.806166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:19.396 [2024-11-26 18:25:53.806177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:19.396 [2024-11-26 18:25:53.806190] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:24:19.396 [2024-11-26 18:25:53.806201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:19.396 [2024-11-26 18:25:53.806213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:19.396 [2024-11-26 18:25:53.806223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:19.396 [2024-11-26 18:25:53.806235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:19.396 [2024-11-26 18:25:53.806245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:19.396 [2024-11-26 18:25:53.806259] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:19.396 [2024-11-26 18:25:53.806272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:19.396 [2024-11-26 18:25:53.806289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:19.396 [2024-11-26 18:25:53.806301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:19.396 [2024-11-26 18:25:53.806315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:19.396 [2024-11-26 18:25:53.806326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:19.396 [2024-11-26 18:25:53.806339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:19.396 [2024-11-26 18:25:53.806350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:19.396 [2024-11-26 18:25:53.806363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:19.396 [2024-11-26 18:25:53.806373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:19.396 [2024-11-26 18:25:53.806386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:19.396 [2024-11-26 18:25:53.806397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:19.396 [2024-11-26 18:25:53.806410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:19.396 [2024-11-26 18:25:53.806420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:19.396 [2024-11-26 18:25:53.806462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:19.396 [2024-11-26 18:25:53.806475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:19.396 [2024-11-26 18:25:53.806488] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:19.396 [2024-11-26 
18:25:53.806500] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:19.396 [2024-11-26 18:25:53.806545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:19.396 [2024-11-26 18:25:53.806558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:19.396 [2024-11-26 18:25:53.806583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:19.396 [2024-11-26 18:25:53.806604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:19.396 [2024-11-26 18:25:53.806620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.396 [2024-11-26 18:25:53.806632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:19.396 [2024-11-26 18:25:53.806646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.269 ms 00:24:19.396 [2024-11-26 18:25:53.806660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.396 [2024-11-26 18:25:53.848076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.396 [2024-11-26 18:25:53.848159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:19.396 [2024-11-26 18:25:53.848188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.303 ms 00:24:19.396 [2024-11-26 18:25:53.848210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.396 [2024-11-26 18:25:53.848429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.396 [2024-11-26 18:25:53.848450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:19.396 [2024-11-26 18:25:53.848471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:24:19.396 [2024-11-26 18:25:53.848485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.655 [2024-11-26 18:25:53.894784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.655 [2024-11-26 18:25:53.894859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:19.655 [2024-11-26 18:25:53.894895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.260 ms 00:24:19.655 [2024-11-26 18:25:53.894907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.655 [2024-11-26 18:25:53.895057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.655 [2024-11-26 18:25:53.895076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:19.655 [2024-11-26 18:25:53.895091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:19.655 [2024-11-26 18:25:53.895102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.655 [2024-11-26 18:25:53.895827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.655 [2024-11-26 18:25:53.895851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:19.655 [2024-11-26 18:25:53.895874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:24:19.655 [2024-11-26 18:25:53.895888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:19.655 [2024-11-26 18:25:53.896144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.655 [2024-11-26 18:25:53.896179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:19.655 [2024-11-26 18:25:53.896198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:24:19.655 [2024-11-26 18:25:53.896211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.655 [2024-11-26 18:25:53.918405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.655 [2024-11-26 18:25:53.918463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:19.655 [2024-11-26 18:25:53.918487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.153 ms 00:24:19.655 [2024-11-26 18:25:53.918500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.655 [2024-11-26 18:25:53.948825] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:19.655 [2024-11-26 18:25:53.948881] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:19.655 [2024-11-26 18:25:53.948923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.655 [2024-11-26 18:25:53.948936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:19.655 [2024-11-26 18:25:53.948955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.175 ms 00:24:19.655 [2024-11-26 18:25:53.948982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.655 [2024-11-26 18:25:53.976497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.655 [2024-11-26 18:25:53.976552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:19.655 [2024-11-26 18:25:53.976585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.405 ms 00:24:19.655 [2024-11-26 18:25:53.976601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.655 [2024-11-26 18:25:53.991519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.655 [2024-11-26 18:25:53.991597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:19.655 [2024-11-26 18:25:53.991628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.810 ms 00:24:19.655 [2024-11-26 18:25:53.991641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.655 [2024-11-26 18:25:54.006238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.655 [2024-11-26 18:25:54.006289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:19.655 [2024-11-26 18:25:54.006326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.499 ms 00:24:19.655 [2024-11-26 18:25:54.006338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.655 [2024-11-26 18:25:54.007374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.655 [2024-11-26 18:25:54.007429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:19.655 [2024-11-26 18:25:54.007452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.900 ms 00:24:19.655 [2024-11-26 18:25:54.007466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.655 [2024-11-26 
18:25:54.079662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.655 [2024-11-26 18:25:54.079765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:19.655 [2024-11-26 18:25:54.079806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.156 ms 00:24:19.655 [2024-11-26 18:25:54.079819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.655 [2024-11-26 18:25:54.090882] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:19.655 [2024-11-26 18:25:54.109797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.655 [2024-11-26 18:25:54.109919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:19.655 [2024-11-26 18:25:54.109940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.827 ms 00:24:19.655 [2024-11-26 18:25:54.109958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.655 [2024-11-26 18:25:54.110121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.655 [2024-11-26 18:25:54.110147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:19.655 [2024-11-26 18:25:54.110185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:19.655 [2024-11-26 18:25:54.110218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.655 [2024-11-26 18:25:54.110304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.655 [2024-11-26 18:25:54.110330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:19.655 [2024-11-26 18:25:54.110345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:19.655 [2024-11-26 18:25:54.110371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.655 [2024-11-26 18:25:54.110408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.655 [2024-11-26 18:25:54.110434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:19.655 [2024-11-26 18:25:54.110448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:19.655 [2024-11-26 18:25:54.110465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.655 [2024-11-26 18:25:54.110553] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:19.655 [2024-11-26 18:25:54.110626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.655 [2024-11-26 18:25:54.110650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:19.655 [2024-11-26 18:25:54.110671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:19.655 [2024-11-26 18:25:54.110689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.913 [2024-11-26 18:25:54.139680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.913 [2024-11-26 18:25:54.139743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:19.913 [2024-11-26 18:25:54.139768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.943 ms 00:24:19.913 [2024-11-26 18:25:54.139781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.913 [2024-11-26 18:25:54.139942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.913 [2024-11-26 18:25:54.139994] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:19.913 [2024-11-26 18:25:54.140040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:24:19.913 [2024-11-26 18:25:54.140054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.914 [2024-11-26 18:25:54.141454] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:19.914 [2024-11-26 18:25:54.145386] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 374.165 ms, result 0 00:24:19.914 [2024-11-26 18:25:54.146709] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:19.914 Some configs were skipped because the RPC state that can call them passed over. 00:24:19.914 18:25:54 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:24:20.172 [2024-11-26 18:25:54.463658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.172 [2024-11-26 18:25:54.463767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:20.172 [2024-11-26 18:25:54.463789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.768 ms 00:24:20.172 [2024-11-26 18:25:54.463809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.172 [2024-11-26 18:25:54.463879] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.987 ms, result 0 00:24:20.172 true 00:24:20.172 18:25:54 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:24:20.430 [2024-11-26 18:25:54.699634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.430 [2024-11-26 18:25:54.699685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:20.430 [2024-11-26 18:25:54.699712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.438 ms 00:24:20.430 [2024-11-26 18:25:54.699726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.430 [2024-11-26 18:25:54.699791] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.601 ms, result 0 00:24:20.430 true 00:24:20.430 18:25:54 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78964 00:24:20.430 18:25:54 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78964 ']' 00:24:20.430 18:25:54 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78964 00:24:20.430 18:25:54 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:24:20.430 18:25:54 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:20.430 18:25:54 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78964 00:24:20.430 18:25:54 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:20.430 18:25:54 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:20.430 killing process with pid 78964 00:24:20.430 18:25:54 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78964' 00:24:20.430 18:25:54 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78964 00:24:20.430 18:25:54 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78964 00:24:21.366 [2024-11-26 18:25:55.676160] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.366 [2024-11-26 18:25:55.676245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:21.366 [2024-11-26 18:25:55.676269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:21.366 [2024-11-26 18:25:55.676285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.366 [2024-11-26 18:25:55.676321] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:21.366 [2024-11-26 18:25:55.680087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.366 [2024-11-26 18:25:55.680142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:21.366 [2024-11-26 18:25:55.680164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.738 ms 00:24:21.366 [2024-11-26 18:25:55.680176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.366 [2024-11-26 18:25:55.680488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.366 [2024-11-26 18:25:55.680517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:21.366 [2024-11-26 18:25:55.680534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:24:21.366 [2024-11-26 18:25:55.680546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.366 [2024-11-26 18:25:55.684703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.366 [2024-11-26 18:25:55.684751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:21.366 [2024-11-26 18:25:55.684769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.113 ms 00:24:21.366 [2024-11-26 18:25:55.684782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.366 [2024-11-26 18:25:55.692108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.366 [2024-11-26 18:25:55.692159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:21.366 [2024-11-26 18:25:55.692191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.275 ms 00:24:21.366 [2024-11-26 18:25:55.692202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.367 [2024-11-26 18:25:55.704649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.367 [2024-11-26 18:25:55.704716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:21.367 [2024-11-26 18:25:55.704754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.384 ms 00:24:21.367 [2024-11-26 18:25:55.704765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.367 [2024-11-26 18:25:55.713579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.367 [2024-11-26 18:25:55.713661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:21.367 [2024-11-26 18:25:55.713681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.749 ms 00:24:21.367 [2024-11-26 18:25:55.713692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.367 [2024-11-26 18:25:55.713853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.367 [2024-11-26 18:25:55.713873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:21.367 [2024-11-26 18:25:55.713889] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:24:21.367 [2024-11-26 18:25:55.713900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.367 [2024-11-26 18:25:55.727087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.367 [2024-11-26 18:25:55.727145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:21.367 [2024-11-26 18:25:55.727180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.156 ms 00:24:21.367 [2024-11-26 18:25:55.727192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.367 [2024-11-26 18:25:55.740189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.367 [2024-11-26 18:25:55.740245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:21.367 [2024-11-26 18:25:55.740282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.947 ms 00:24:21.367 [2024-11-26 18:25:55.740292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.367 [2024-11-26 18:25:55.752500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.367 [2024-11-26 18:25:55.752579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:21.367 [2024-11-26 18:25:55.752616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.158 ms 00:24:21.367 [2024-11-26 18:25:55.752628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.367 [2024-11-26 18:25:55.764500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.367 [2024-11-26 18:25:55.764581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:21.367 [2024-11-26 18:25:55.764601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.789 ms 00:24:21.367 [2024-11-26 18:25:55.764611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.367 [2024-11-26 18:25:55.764658] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:21.367 [2024-11-26 18:25:55.764681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.764701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.764712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.764725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.764736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.764752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.764763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.764777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.764804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.764834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 
18:25:55.764846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.764860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.764871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.764885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.764896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.764912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.764924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.764938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.764949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.764963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.764974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.764990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
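
Note: this per-band validity dump, which continues through Band 100 below, prints one line per band; after a clean shutdown with no user data every band reads "0 / 261120 wr_cnt: 0 state: free". Rather than eyeballing a hundred lines, a dump like this can be collapsed into a per-state count, again assuming the raw console output saved as console.log:

    grep -o 'state: [a-z]*' console.log | sort | uniq -c
    # expected here:  100 state: free
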
00:24:21.367 [2024-11-26 18:25:55.765175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:21.367 [2024-11-26 18:25:55.765561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.765989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.766002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.766019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.766031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.766046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.766059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.766073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:21.368 [2024-11-26 18:25:55.766106] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:21.368 [2024-11-26 18:25:55.766124] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e417d21-16e8-4060-8f5a-5ce9752d454b 00:24:21.368 [2024-11-26 18:25:55.766140] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:21.368 [2024-11-26 18:25:55.766154] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:21.368 [2024-11-26 18:25:55.766166] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:21.368 [2024-11-26 18:25:55.766180] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:21.368 [2024-11-26 18:25:55.766191] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:21.368 [2024-11-26 18:25:55.766206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:21.368 [2024-11-26 18:25:55.766217] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:21.368 [2024-11-26 18:25:55.766230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:21.368 [2024-11-26 18:25:55.766240] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:21.368 [2024-11-26 18:25:55.766254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
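
Note: on the "WAF: inf" in the statistics dumped above: the reported value is consistent with write amplification computed as media writes over host writes, i.e. total writes (960, all FTL-internal metadata in this run) divided by user writes (0), printed as infinity rather than dividing by zero. A guarded sketch with the two values from this dump:

    total_writes=960   # "total writes" above
    user_writes=0      # "user writes" above
    if (( user_writes == 0 )); then
        echo "WAF: inf"
    else
        awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.2f\n", t / u }'
    fi
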
00:24:21.368 [2024-11-26 18:25:55.766266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:21.368 [2024-11-26 18:25:55.766281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.600 ms 00:24:21.368 [2024-11-26 18:25:55.766296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.368 [2024-11-26 18:25:55.782339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.368 [2024-11-26 18:25:55.782396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:21.368 [2024-11-26 18:25:55.782418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.997 ms 00:24:21.368 [2024-11-26 18:25:55.782431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.368 [2024-11-26 18:25:55.783024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.368 [2024-11-26 18:25:55.783090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:21.368 [2024-11-26 18:25:55.783113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:24:21.368 [2024-11-26 18:25:55.783126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.626 [2024-11-26 18:25:55.837468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.626 [2024-11-26 18:25:55.837539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:21.626 [2024-11-26 18:25:55.837611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.626 [2024-11-26 18:25:55.837627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.626 [2024-11-26 18:25:55.837758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.626 [2024-11-26 18:25:55.837776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:21.626 [2024-11-26 18:25:55.837811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.626 [2024-11-26 18:25:55.837822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.626 [2024-11-26 18:25:55.837900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.626 [2024-11-26 18:25:55.837921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:21.626 [2024-11-26 18:25:55.837941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.626 [2024-11-26 18:25:55.837952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.627 [2024-11-26 18:25:55.837983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.627 [2024-11-26 18:25:55.837997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:21.627 [2024-11-26 18:25:55.838011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.627 [2024-11-26 18:25:55.838026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.627 [2024-11-26 18:25:55.934686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.627 [2024-11-26 18:25:55.934777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:21.627 [2024-11-26 18:25:55.934818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.627 [2024-11-26 18:25:55.934832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.627 [2024-11-26 
18:25:56.010843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.627 [2024-11-26 18:25:56.010930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:21.627 [2024-11-26 18:25:56.010983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.627 [2024-11-26 18:25:56.011005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.627 [2024-11-26 18:25:56.011104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.627 [2024-11-26 18:25:56.011122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:21.627 [2024-11-26 18:25:56.011140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.627 [2024-11-26 18:25:56.011151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.627 [2024-11-26 18:25:56.011205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.627 [2024-11-26 18:25:56.011235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:21.627 [2024-11-26 18:25:56.011250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.627 [2024-11-26 18:25:56.011262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.627 [2024-11-26 18:25:56.011414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.627 [2024-11-26 18:25:56.011433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:21.627 [2024-11-26 18:25:56.011447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.627 [2024-11-26 18:25:56.011459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.627 [2024-11-26 18:25:56.011517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.627 [2024-11-26 18:25:56.011572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:21.627 [2024-11-26 18:25:56.011609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.627 [2024-11-26 18:25:56.011622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.627 [2024-11-26 18:25:56.011679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.627 [2024-11-26 18:25:56.011695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:21.627 [2024-11-26 18:25:56.011712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.627 [2024-11-26 18:25:56.011724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.627 [2024-11-26 18:25:56.011787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:21.627 [2024-11-26 18:25:56.011804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:21.627 [2024-11-26 18:25:56.011819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:21.627 [2024-11-26 18:25:56.011830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.627 [2024-11-26 18:25:56.012041] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 335.850 ms, result 0 00:24:22.562 18:25:56 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:22.562 [2024-11-26 18:25:56.990739] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:24:22.562 [2024-11-26 18:25:56.990973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79024 ] 00:24:22.821 [2024-11-26 18:25:57.174240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:23.080 [2024-11-26 18:25:57.285762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:23.339 [2024-11-26 18:25:57.628670] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:23.339 [2024-11-26 18:25:57.628789] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:23.339 [2024-11-26 18:25:57.793989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.339 [2024-11-26 18:25:57.794051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:23.339 [2024-11-26 18:25:57.794074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:23.339 [2024-11-26 18:25:57.794088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.339 [2024-11-26 18:25:57.797859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.339 [2024-11-26 18:25:57.797905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:23.339 [2024-11-26 18:25:57.797922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.741 ms 00:24:23.339 [2024-11-26 18:25:57.797935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.339 [2024-11-26 18:25:57.798109] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:23.599 [2024-11-26 18:25:57.799139] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:23.599 [2024-11-26 18:25:57.799184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.599 [2024-11-26 18:25:57.799200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:23.599 [2024-11-26 18:25:57.799213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.086 ms 00:24:23.599 [2024-11-26 18:25:57.799226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.599 [2024-11-26 18:25:57.801350] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:23.599 [2024-11-26 18:25:57.818802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.599 [2024-11-26 18:25:57.818988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:23.599 [2024-11-26 18:25:57.819019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.454 ms 00:24:23.599 [2024-11-26 18:25:57.819033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.599 [2024-11-26 18:25:57.819164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.599 [2024-11-26 18:25:57.819188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:23.599 [2024-11-26 18:25:57.819204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:23.599 [2024-11-26 
18:25:57.819216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.599 [2024-11-26 18:25:57.829106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.599 [2024-11-26 18:25:57.829287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:23.599 [2024-11-26 18:25:57.829331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.829 ms 00:24:23.599 [2024-11-26 18:25:57.829344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.599 [2024-11-26 18:25:57.829526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.599 [2024-11-26 18:25:57.829549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:23.599 [2024-11-26 18:25:57.829563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:24:23.599 [2024-11-26 18:25:57.829574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.599 [2024-11-26 18:25:57.829705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.599 [2024-11-26 18:25:57.829725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:23.599 [2024-11-26 18:25:57.829739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:23.599 [2024-11-26 18:25:57.829751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.599 [2024-11-26 18:25:57.829784] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:23.599 [2024-11-26 18:25:57.835204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.599 [2024-11-26 18:25:57.835245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:23.599 [2024-11-26 18:25:57.835263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.430 ms 00:24:23.599 [2024-11-26 18:25:57.835275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.599 [2024-11-26 18:25:57.835385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.599 [2024-11-26 18:25:57.835405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:23.599 [2024-11-26 18:25:57.835417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:23.599 [2024-11-26 18:25:57.835428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.599 [2024-11-26 18:25:57.835466] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:23.599 [2024-11-26 18:25:57.835494] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:23.599 [2024-11-26 18:25:57.835534] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:23.599 [2024-11-26 18:25:57.835553] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:23.599 [2024-11-26 18:25:57.835669] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:23.599 [2024-11-26 18:25:57.835688] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:23.599 [2024-11-26 18:25:57.835703] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:24:23.599 [2024-11-26 18:25:57.835722] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:23.599 [2024-11-26 18:25:57.835735] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:23.599 [2024-11-26 18:25:57.835748] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:23.599 [2024-11-26 18:25:57.835759] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:23.599 [2024-11-26 18:25:57.835770] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:23.599 [2024-11-26 18:25:57.835781] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:23.599 [2024-11-26 18:25:57.835793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.599 [2024-11-26 18:25:57.835805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:23.599 [2024-11-26 18:25:57.835816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:24:23.599 [2024-11-26 18:25:57.835827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.599 [2024-11-26 18:25:57.835920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.599 [2024-11-26 18:25:57.835941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:23.599 [2024-11-26 18:25:57.835985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:24:23.599 [2024-11-26 18:25:57.835997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.599 [2024-11-26 18:25:57.836116] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:23.599 [2024-11-26 18:25:57.836135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:23.599 [2024-11-26 18:25:57.836149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:23.599 [2024-11-26 18:25:57.836161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:23.599 [2024-11-26 18:25:57.836173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:23.599 [2024-11-26 18:25:57.836184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:23.599 [2024-11-26 18:25:57.836196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:23.599 [2024-11-26 18:25:57.836209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:23.599 [2024-11-26 18:25:57.836220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:23.599 [2024-11-26 18:25:57.836232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:23.599 [2024-11-26 18:25:57.836244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:23.599 [2024-11-26 18:25:57.836270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:23.599 [2024-11-26 18:25:57.836281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:23.599 [2024-11-26 18:25:57.836292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:23.599 [2024-11-26 18:25:57.836304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:23.599 [2024-11-26 18:25:57.836315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:23.599 [2024-11-26 18:25:57.836341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:24:23.599 [2024-11-26 18:25:57.836367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:23.599 [2024-11-26 18:25:57.836380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:23.599 [2024-11-26 18:25:57.836391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:23.599 [2024-11-26 18:25:57.836402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:23.599 [2024-11-26 18:25:57.836413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:23.599 [2024-11-26 18:25:57.836424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:23.599 [2024-11-26 18:25:57.836434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:23.599 [2024-11-26 18:25:57.836445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:23.599 [2024-11-26 18:25:57.836455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:23.599 [2024-11-26 18:25:57.836465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:23.599 [2024-11-26 18:25:57.836475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:23.599 [2024-11-26 18:25:57.836486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:23.599 [2024-11-26 18:25:57.836497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:23.599 [2024-11-26 18:25:57.836507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:23.599 [2024-11-26 18:25:57.836518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:23.599 [2024-11-26 18:25:57.836528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:23.599 [2024-11-26 18:25:57.836539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:23.599 [2024-11-26 18:25:57.836550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:23.599 [2024-11-26 18:25:57.836560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:23.599 [2024-11-26 18:25:57.836570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:23.599 [2024-11-26 18:25:57.836580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:23.599 [2024-11-26 18:25:57.836591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:23.599 [2024-11-26 18:25:57.836601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:23.599 [2024-11-26 18:25:57.836959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:23.600 [2024-11-26 18:25:57.837022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:23.600 [2024-11-26 18:25:57.837073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:23.600 [2024-11-26 18:25:57.837114] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:23.600 [2024-11-26 18:25:57.837235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:23.600 [2024-11-26 18:25:57.837297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:23.600 [2024-11-26 18:25:57.837353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:23.600 [2024-11-26 18:25:57.837486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:23.600 [2024-11-26 18:25:57.837539] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:23.600 [2024-11-26 18:25:57.837686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:23.600 [2024-11-26 18:25:57.837741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:23.600 [2024-11-26 18:25:57.837845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:23.600 [2024-11-26 18:25:57.837898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:23.600 [2024-11-26 18:25:57.838000] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:23.600 [2024-11-26 18:25:57.838071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:23.600 [2024-11-26 18:25:57.838201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:23.600 [2024-11-26 18:25:57.838265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:23.600 [2024-11-26 18:25:57.838422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:23.600 [2024-11-26 18:25:57.838607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:23.600 [2024-11-26 18:25:57.838663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:23.600 [2024-11-26 18:25:57.838704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:23.600 [2024-11-26 18:25:57.838718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:23.600 [2024-11-26 18:25:57.838730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:23.600 [2024-11-26 18:25:57.838742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:23.600 [2024-11-26 18:25:57.838755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:23.600 [2024-11-26 18:25:57.838767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:23.600 [2024-11-26 18:25:57.838779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:23.600 [2024-11-26 18:25:57.838791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:23.600 [2024-11-26 18:25:57.838803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:23.600 [2024-11-26 18:25:57.838815] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:23.600 [2024-11-26 18:25:57.838829] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:23.600 [2024-11-26 18:25:57.838842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:23.600 [2024-11-26 18:25:57.838855] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:23.600 [2024-11-26 18:25:57.838867] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:23.600 [2024-11-26 18:25:57.838879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:23.600 [2024-11-26 18:25:57.838894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.600 [2024-11-26 18:25:57.838916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:23.600 [2024-11-26 18:25:57.838929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.846 ms 00:24:23.600 [2024-11-26 18:25:57.838941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.600 [2024-11-26 18:25:57.878224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.600 [2024-11-26 18:25:57.878295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:23.600 [2024-11-26 18:25:57.878333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.192 ms 00:24:23.600 [2024-11-26 18:25:57.878360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.600 [2024-11-26 18:25:57.878622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.600 [2024-11-26 18:25:57.878646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:23.600 [2024-11-26 18:25:57.878660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:24:23.600 [2024-11-26 18:25:57.878671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.600 [2024-11-26 18:25:57.928018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.600 [2024-11-26 18:25:57.928083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:23.600 [2024-11-26 18:25:57.928124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.297 ms 00:24:23.600 [2024-11-26 18:25:57.928136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.600 [2024-11-26 18:25:57.928282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.600 [2024-11-26 18:25:57.928303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:23.600 [2024-11-26 18:25:57.928317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:23.600 [2024-11-26 18:25:57.928329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.600 [2024-11-26 18:25:57.928955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.600 [2024-11-26 18:25:57.929015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:23.600 [2024-11-26 18:25:57.929055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:24:23.600 [2024-11-26 18:25:57.929067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.600 [2024-11-26 18:25:57.929244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:23.600 [2024-11-26 18:25:57.929276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:23.600 [2024-11-26 18:25:57.929290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:24:23.600 [2024-11-26 18:25:57.929301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.600 [2024-11-26 18:25:57.948350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.600 [2024-11-26 18:25:57.948596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:23.600 [2024-11-26 18:25:57.948744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.016 ms 00:24:23.600 [2024-11-26 18:25:57.948769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.600 [2024-11-26 18:25:57.966086] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:23.600 [2024-11-26 18:25:57.966133] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:23.600 [2024-11-26 18:25:57.966152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.600 [2024-11-26 18:25:57.966165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:23.600 [2024-11-26 18:25:57.966178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.237 ms 00:24:23.600 [2024-11-26 18:25:57.966189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.600 [2024-11-26 18:25:57.993232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.600 [2024-11-26 18:25:57.993275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:23.600 [2024-11-26 18:25:57.993307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.948 ms 00:24:23.600 [2024-11-26 18:25:57.993319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.600 [2024-11-26 18:25:58.007470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.600 [2024-11-26 18:25:58.007707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:23.600 [2024-11-26 18:25:58.007736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.060 ms 00:24:23.600 [2024-11-26 18:25:58.007748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.600 [2024-11-26 18:25:58.021455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.600 [2024-11-26 18:25:58.021497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:23.600 [2024-11-26 18:25:58.021529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.610 ms 00:24:23.600 [2024-11-26 18:25:58.021539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.600 [2024-11-26 18:25:58.022427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.600 [2024-11-26 18:25:58.022464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:23.600 [2024-11-26 18:25:58.022481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:24:23.600 [2024-11-26 18:25:58.022492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.893 [2024-11-26 18:25:58.094362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.893 [2024-11-26 
18:25:58.094447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:23.893 [2024-11-26 18:25:58.094485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.810 ms 00:24:23.893 [2024-11-26 18:25:58.094497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.893 [2024-11-26 18:25:58.105674] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:23.893 [2024-11-26 18:25:58.126475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.893 [2024-11-26 18:25:58.126592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:23.893 [2024-11-26 18:25:58.126618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.737 ms 00:24:23.893 [2024-11-26 18:25:58.126640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.893 [2024-11-26 18:25:58.126788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.893 [2024-11-26 18:25:58.126812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:23.893 [2024-11-26 18:25:58.126827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:23.893 [2024-11-26 18:25:58.126840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.893 [2024-11-26 18:25:58.126918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.894 [2024-11-26 18:25:58.126937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:23.894 [2024-11-26 18:25:58.126951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:24:23.894 [2024-11-26 18:25:58.126969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.894 [2024-11-26 18:25:58.127019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.894 [2024-11-26 18:25:58.127039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:23.894 [2024-11-26 18:25:58.127052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:23.894 [2024-11-26 18:25:58.127064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.894 [2024-11-26 18:25:58.127116] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:23.894 [2024-11-26 18:25:58.127135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.894 [2024-11-26 18:25:58.127147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:23.894 [2024-11-26 18:25:58.127160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:23.894 [2024-11-26 18:25:58.127186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.894 [2024-11-26 18:25:58.158252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.894 [2024-11-26 18:25:58.158298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:23.894 [2024-11-26 18:25:58.158333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.033 ms 00:24:23.894 [2024-11-26 18:25:58.158344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.894 [2024-11-26 18:25:58.158490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:23.894 [2024-11-26 18:25:58.158539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:23.894 [2024-11-26 
18:25:58.158563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:24:23.894 [2024-11-26 18:25:58.158597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:23.894 [2024-11-26 18:25:58.160059] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:23.894 [2024-11-26 18:25:58.163989] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 365.635 ms, result 0 00:24:23.894 [2024-11-26 18:25:58.164954] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:23.894 [2024-11-26 18:25:58.180447] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:24.828  [2024-11-26T18:26:00.664Z] Copying: 24/256 [MB] (24 MBps) [2024-11-26T18:26:01.599Z] Copying: 45/256 [MB] (20 MBps) [2024-11-26T18:26:02.534Z] Copying: 66/256 [MB] (20 MBps) [2024-11-26T18:26:03.469Z] Copying: 86/256 [MB] (20 MBps) [2024-11-26T18:26:04.402Z] Copying: 107/256 [MB] (20 MBps) [2024-11-26T18:26:05.336Z] Copying: 127/256 [MB] (20 MBps) [2024-11-26T18:26:06.269Z] Copying: 148/256 [MB] (21 MBps) [2024-11-26T18:26:07.643Z] Copying: 169/256 [MB] (20 MBps) [2024-11-26T18:26:08.577Z] Copying: 189/256 [MB] (20 MBps) [2024-11-26T18:26:09.519Z] Copying: 209/256 [MB] (20 MBps) [2024-11-26T18:26:10.464Z] Copying: 230/256 [MB] (20 MBps) [2024-11-26T18:26:10.724Z] Copying: 250/256 [MB] (20 MBps) [2024-11-26T18:26:10.724Z] Copying: 256/256 [MB] (average 20 MBps)[2024-11-26 18:26:10.521719] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:36.263 [2024-11-26 18:26:10.533005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.263 [2024-11-26 18:26:10.533046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:36.263 [2024-11-26 18:26:10.533089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:36.263 [2024-11-26 18:26:10.533100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.263 [2024-11-26 18:26:10.533128] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:36.263 [2024-11-26 18:26:10.536527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.263 [2024-11-26 18:26:10.536583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:36.263 [2024-11-26 18:26:10.536613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.379 ms 00:24:36.264 [2024-11-26 18:26:10.536623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.264 [2024-11-26 18:26:10.536909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.264 [2024-11-26 18:26:10.536927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:36.264 [2024-11-26 18:26:10.536939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:24:36.264 [2024-11-26 18:26:10.536950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.264 [2024-11-26 18:26:10.540519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.264 [2024-11-26 18:26:10.540573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:36.264 [2024-11-26 18:26:10.540587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.513 ms 00:24:36.264 [2024-11-26 18:26:10.540598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.264 [2024-11-26 18:26:10.546702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.264 [2024-11-26 18:26:10.546734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:36.264 [2024-11-26 18:26:10.546764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.082 ms 00:24:36.264 [2024-11-26 18:26:10.546774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.264 [2024-11-26 18:26:10.572864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.264 [2024-11-26 18:26:10.572906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:36.264 [2024-11-26 18:26:10.572938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.991 ms 00:24:36.264 [2024-11-26 18:26:10.572948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.264 [2024-11-26 18:26:10.588718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.264 [2024-11-26 18:26:10.588760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:36.264 [2024-11-26 18:26:10.588796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.711 ms 00:24:36.264 [2024-11-26 18:26:10.588807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.264 [2024-11-26 18:26:10.588943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.264 [2024-11-26 18:26:10.588977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:36.264 [2024-11-26 18:26:10.589004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:24:36.264 [2024-11-26 18:26:10.589015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.264 [2024-11-26 18:26:10.619824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.264 [2024-11-26 18:26:10.619876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:36.264 [2024-11-26 18:26:10.619908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.786 ms 00:24:36.264 [2024-11-26 18:26:10.619919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.264 [2024-11-26 18:26:10.645850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.264 [2024-11-26 18:26:10.645889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:36.264 [2024-11-26 18:26:10.645921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.837 ms 00:24:36.264 [2024-11-26 18:26:10.645931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.264 [2024-11-26 18:26:10.672894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.264 [2024-11-26 18:26:10.672950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:36.264 [2024-11-26 18:26:10.672997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.910 ms 00:24:36.264 [2024-11-26 18:26:10.673009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.264 [2024-11-26 18:26:10.703828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:36.264 [2024-11-26 18:26:10.703870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:36.264 
[2024-11-26 18:26:10.703903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.698 ms 00:24:36.264 [2024-11-26 18:26:10.703914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:36.264 [2024-11-26 18:26:10.704010] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:36.264 [2024-11-26 18:26:10.704036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704658] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:36.264 [2024-11-26 18:26:10.704695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.704706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.704717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.704728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.704755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.704770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.704782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.704794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.704805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.704818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.704830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.704850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.704862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.704873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.704885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.704896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.704909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.704921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.704933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.704961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.704990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 
18:26:10.705015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:36.265 [2024-11-26 18:26:10.705387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 
00:24:36.265 [2024-11-26 18:26:10.705400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:24:36.265 [2024-11-26 18:26:10.705414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:24:36.265 [2024-11-26 18:26:10.705426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:24:36.265 [2024-11-26 18:26:10.705447] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:24:36.265 [2024-11-26 18:26:10.705459] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2e417d21-16e8-4060-8f5a-5ce9752d454b
00:24:36.265 [2024-11-26 18:26:10.705471] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:24:36.265 [2024-11-26 18:26:10.705483] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:24:36.265 [2024-11-26 18:26:10.705496] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:24:36.265 [2024-11-26 18:26:10.705508] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:24:36.265 [2024-11-26 18:26:10.705519] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:24:36.265 [2024-11-26 18:26:10.705530] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:24:36.265 [2024-11-26 18:26:10.705547] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:24:36.265 [2024-11-26 18:26:10.705557] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:24:36.265 [2024-11-26 18:26:10.705568] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:24:36.265 [2024-11-26 18:26:10.705579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:36.265 [2024-11-26 18:26:10.705591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:24:36.265 [2024-11-26 18:26:10.705603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.571 ms
00:24:36.265 [2024-11-26 18:26:10.705614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:36.524 [2024-11-26 18:26:10.723251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:36.524 [2024-11-26 18:26:10.723467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:24:36.524 [2024-11-26 18:26:10.723496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.609 ms
00:24:36.524 [2024-11-26 18:26:10.723510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:36.524 [2024-11-26 18:26:10.724088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:36.524 [2024-11-26 18:26:10.724109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:24:36.524 [2024-11-26 18:26:10.724124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms
00:24:36.524 [2024-11-26 18:26:10.724134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:36.524 [2024-11-26 18:26:10.774405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:36.524 [2024-11-26 18:26:10.774470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:24:36.524 [2024-11-26 18:26:10.774505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:36.524 [2024-11-26 18:26:10.774558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:36.524 [2024-11-26 18:26:10.774723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:36.524 [2024-11-26 18:26:10.774745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:24:36.524 [2024-11-26 18:26:10.774759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:36.524 [2024-11-26 18:26:10.774772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:36.524 [2024-11-26 18:26:10.774843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:36.524 [2024-11-26 18:26:10.774864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:24:36.525 [2024-11-26 18:26:10.774877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:36.525 [2024-11-26 18:26:10.774889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:36.525 [2024-11-26 18:26:10.774923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:36.525 [2024-11-26 18:26:10.774940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:24:36.525 [2024-11-26 18:26:10.774953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:36.525 [2024-11-26 18:26:10.774965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:36.525 [2024-11-26 18:26:10.865204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:36.525 [2024-11-26 18:26:10.865276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:24:36.525 [2024-11-26 18:26:10.865310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:36.525 [2024-11-26 18:26:10.865321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:36.525 [2024-11-26 18:26:10.937856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:36.525 [2024-11-26 18:26:10.937914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:24:36.525 [2024-11-26 18:26:10.937932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:36.525 [2024-11-26 18:26:10.937959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:36.525 [2024-11-26 18:26:10.938055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:36.525 [2024-11-26 18:26:10.938073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:24:36.525 [2024-11-26 18:26:10.938086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:36.525 [2024-11-26 18:26:10.938097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:36.525 [2024-11-26 18:26:10.938136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:36.525 [2024-11-26 18:26:10.938159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:24:36.525 [2024-11-26 18:26:10.938171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:36.525 [2024-11-26 18:26:10.938183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:36.525 [2024-11-26 18:26:10.938310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:36.525 [2024-11-26 18:26:10.938346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:24:36.525 [2024-11-26 18:26:10.938359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:36.525 [2024-11-26 18:26:10.938370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:36.525 [2024-11-26 18:26:10.938422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:36.525 [2024-11-26 18:26:10.938441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:24:36.525 [2024-11-26 18:26:10.938459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:36.525 [2024-11-26 18:26:10.938470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:36.525 [2024-11-26 18:26:10.938544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:36.525 [2024-11-26 18:26:10.938561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:24:36.525 [2024-11-26 18:26:10.938623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:36.525 [2024-11-26 18:26:10.938638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:36.525 [2024-11-26 18:26:10.938705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:36.525 [2024-11-26 18:26:10.938728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:24:36.525 [2024-11-26 18:26:10.938741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:36.525 [2024-11-26 18:26:10.938753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:36.525 [2024-11-26 18:26:10.938939] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 405.917 ms, result 0
00:24:37.458
00:24:37.458
00:24:37.458 18:26:11 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:24:38.024 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK
00:24:38.024 18:26:12 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:24:38.024 18:26:12 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill
00:24:38.024 18:26:12 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:24:38.024 18:26:12 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:24:38.024 18:26:12 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
00:24:38.024 18:26:12 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data
00:24:38.024 Process with pid 78964 is not found
00:24:38.024 18:26:12 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78964
00:24:38.024 18:26:12 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78964 ']'
00:24:38.024 18:26:12 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78964
00:24:38.024 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78964) - No such process
00:24:38.024 18:26:12 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78964 is not found'
************************************
00:24:38.024 END TEST ftl_trim
************************************
00:24:38.024
00:24:38.024 real 1m15.254s
00:24:38.024 user 1m43.377s
00:24:38.024 sys 0m8.008s
00:24:38.024 18:26:12 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:38.024 18:26:12 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:24:38.283 18:26:12 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:24:38.283 18:26:12 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:24:38.283 18:26:12 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:38.283 18:26:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:38.283 ************************************ 00:24:38.283 START TEST ftl_restore 00:24:38.283 ************************************ 00:24:38.283 18:26:12 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:24:38.283 * Looking for test storage... 00:24:38.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:38.283 18:26:12 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:38.283 18:26:12 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:24:38.283 18:26:12 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:38.283 18:26:12 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:38.283 18:26:12 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:38.283 18:26:12 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:38.283 18:26:12 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:38.283 18:26:12 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:24:38.283 18:26:12 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:24:38.283 18:26:12 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:24:38.283 18:26:12 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:24:38.283 18:26:12 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:24:38.283 18:26:12 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:24:38.284 18:26:12 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:24:38.284 18:26:12 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:38.284 18:26:12 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:24:38.284 18:26:12 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:24:38.284 18:26:12 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:38.284 18:26:12 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:38.284 18:26:12 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:24:38.284 18:26:12 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:24:38.284 18:26:12 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:38.284 18:26:12 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:24:38.284 18:26:12 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:24:38.284 18:26:12 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:24:38.284 18:26:12 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:24:38.284 18:26:12 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:38.284 18:26:12 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:24:38.284 18:26:12 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:24:38.284 18:26:12 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:38.284 18:26:12 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:38.284 18:26:12 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:24:38.284 18:26:12 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:38.284 18:26:12 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:38.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.284 --rc genhtml_branch_coverage=1 00:24:38.284 --rc genhtml_function_coverage=1 00:24:38.284 --rc genhtml_legend=1 00:24:38.284 --rc geninfo_all_blocks=1 00:24:38.284 --rc geninfo_unexecuted_blocks=1 00:24:38.284 00:24:38.284 ' 00:24:38.284 18:26:12 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:38.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.284 --rc genhtml_branch_coverage=1 00:24:38.284 --rc genhtml_function_coverage=1 00:24:38.284 --rc genhtml_legend=1 00:24:38.284 --rc geninfo_all_blocks=1 00:24:38.284 --rc geninfo_unexecuted_blocks=1 00:24:38.284 00:24:38.284 ' 00:24:38.284 18:26:12 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:38.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.284 --rc genhtml_branch_coverage=1 00:24:38.284 --rc genhtml_function_coverage=1 00:24:38.284 --rc genhtml_legend=1 00:24:38.284 --rc geninfo_all_blocks=1 00:24:38.284 --rc geninfo_unexecuted_blocks=1 00:24:38.284 00:24:38.284 ' 00:24:38.284 18:26:12 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:38.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:38.284 --rc genhtml_branch_coverage=1 00:24:38.284 --rc genhtml_function_coverage=1 00:24:38.284 --rc genhtml_legend=1 00:24:38.284 --rc geninfo_all_blocks=1 00:24:38.284 --rc geninfo_unexecuted_blocks=1 00:24:38.284 00:24:38.284 ' 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.CdOO5hsiiG 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:38.284 
18:26:12 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79248 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:38.284 18:26:12 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79248 00:24:38.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.284 18:26:12 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79248 ']' 00:24:38.284 18:26:12 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.284 18:26:12 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.284 18:26:12 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.284 18:26:12 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.284 18:26:12 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:38.542 [2024-11-26 18:26:12.894824] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:24:38.542 [2024-11-26 18:26:12.895309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79248 ] 00:24:38.801 [2024-11-26 18:26:13.091006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.801 [2024-11-26 18:26:13.238451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.733 18:26:14 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.733 18:26:14 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:24:39.733 18:26:14 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:39.733 18:26:14 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:24:39.733 18:26:14 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:39.733 18:26:14 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:24:39.733 18:26:14 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:24:39.733 18:26:14 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:39.990 18:26:14 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:39.990 18:26:14 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:24:39.990 18:26:14 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:39.990 18:26:14 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:39.990 18:26:14 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:39.990 18:26:14 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:24:39.990 18:26:14 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:24:39.990 18:26:14 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:40.248 18:26:14 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:40.248 { 00:24:40.248 "name": "nvme0n1", 00:24:40.248 "aliases": [ 00:24:40.248 "279040cd-43bb-4457-a5e4-73ddd4bb286e" 00:24:40.248 ], 00:24:40.248 "product_name": "NVMe disk", 00:24:40.248 "block_size": 4096, 00:24:40.248 "num_blocks": 1310720, 00:24:40.248 "uuid": 
"279040cd-43bb-4457-a5e4-73ddd4bb286e", 00:24:40.248 "numa_id": -1, 00:24:40.248 "assigned_rate_limits": { 00:24:40.248 "rw_ios_per_sec": 0, 00:24:40.248 "rw_mbytes_per_sec": 0, 00:24:40.248 "r_mbytes_per_sec": 0, 00:24:40.248 "w_mbytes_per_sec": 0 00:24:40.248 }, 00:24:40.248 "claimed": true, 00:24:40.248 "claim_type": "read_many_write_one", 00:24:40.248 "zoned": false, 00:24:40.248 "supported_io_types": { 00:24:40.248 "read": true, 00:24:40.248 "write": true, 00:24:40.248 "unmap": true, 00:24:40.248 "flush": true, 00:24:40.248 "reset": true, 00:24:40.248 "nvme_admin": true, 00:24:40.248 "nvme_io": true, 00:24:40.248 "nvme_io_md": false, 00:24:40.248 "write_zeroes": true, 00:24:40.248 "zcopy": false, 00:24:40.248 "get_zone_info": false, 00:24:40.248 "zone_management": false, 00:24:40.248 "zone_append": false, 00:24:40.248 "compare": true, 00:24:40.248 "compare_and_write": false, 00:24:40.248 "abort": true, 00:24:40.248 "seek_hole": false, 00:24:40.248 "seek_data": false, 00:24:40.248 "copy": true, 00:24:40.248 "nvme_iov_md": false 00:24:40.248 }, 00:24:40.248 "driver_specific": { 00:24:40.248 "nvme": [ 00:24:40.248 { 00:24:40.248 "pci_address": "0000:00:11.0", 00:24:40.248 "trid": { 00:24:40.248 "trtype": "PCIe", 00:24:40.248 "traddr": "0000:00:11.0" 00:24:40.248 }, 00:24:40.248 "ctrlr_data": { 00:24:40.248 "cntlid": 0, 00:24:40.248 "vendor_id": "0x1b36", 00:24:40.248 "model_number": "QEMU NVMe Ctrl", 00:24:40.248 "serial_number": "12341", 00:24:40.248 "firmware_revision": "8.0.0", 00:24:40.248 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:40.248 "oacs": { 00:24:40.248 "security": 0, 00:24:40.248 "format": 1, 00:24:40.248 "firmware": 0, 00:24:40.248 "ns_manage": 1 00:24:40.248 }, 00:24:40.248 "multi_ctrlr": false, 00:24:40.248 "ana_reporting": false 00:24:40.248 }, 00:24:40.248 "vs": { 00:24:40.248 "nvme_version": "1.4" 00:24:40.248 }, 00:24:40.248 "ns_data": { 00:24:40.248 "id": 1, 00:24:40.248 "can_share": false 00:24:40.248 } 00:24:40.248 } 00:24:40.248 ], 00:24:40.248 "mp_policy": "active_passive" 00:24:40.248 } 00:24:40.248 } 00:24:40.248 ]' 00:24:40.248 18:26:14 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:40.506 18:26:14 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:24:40.506 18:26:14 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:40.506 18:26:14 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:40.506 18:26:14 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:40.506 18:26:14 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:24:40.506 18:26:14 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:24:40.506 18:26:14 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:40.506 18:26:14 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:24:40.506 18:26:14 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:40.506 18:26:14 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:40.764 18:26:15 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=3004a13b-69f0-4cea-b22f-38e0a33c9c89 00:24:40.764 18:26:15 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:24:40.764 18:26:15 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3004a13b-69f0-4cea-b22f-38e0a33c9c89 00:24:41.022 18:26:15 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:24:41.280 18:26:15 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=14594a72-31c6-4008-8481-4d20a12ccdec 00:24:41.280 18:26:15 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 14594a72-31c6-4008-8481-4d20a12ccdec 00:24:41.538 18:26:15 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=df73143c-123f-4ead-ba5b-fb7ca38e1516 00:24:41.538 18:26:15 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:24:41.538 18:26:15 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 df73143c-123f-4ead-ba5b-fb7ca38e1516 00:24:41.538 18:26:15 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:24:41.538 18:26:15 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:41.538 18:26:15 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=df73143c-123f-4ead-ba5b-fb7ca38e1516 00:24:41.538 18:26:15 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:24:41.538 18:26:15 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size df73143c-123f-4ead-ba5b-fb7ca38e1516 00:24:41.538 18:26:15 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=df73143c-123f-4ead-ba5b-fb7ca38e1516 00:24:41.538 18:26:15 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:41.538 18:26:15 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:24:41.538 18:26:15 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:24:41.538 18:26:15 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b df73143c-123f-4ead-ba5b-fb7ca38e1516 00:24:41.797 18:26:16 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:41.797 { 00:24:41.797 "name": "df73143c-123f-4ead-ba5b-fb7ca38e1516", 00:24:41.797 "aliases": [ 00:24:41.797 "lvs/nvme0n1p0" 00:24:41.797 ], 00:24:41.797 "product_name": "Logical Volume", 00:24:41.797 "block_size": 4096, 00:24:41.797 "num_blocks": 26476544, 00:24:41.797 "uuid": "df73143c-123f-4ead-ba5b-fb7ca38e1516", 00:24:41.797 "assigned_rate_limits": { 00:24:41.797 "rw_ios_per_sec": 0, 00:24:41.797 "rw_mbytes_per_sec": 0, 00:24:41.797 "r_mbytes_per_sec": 0, 00:24:41.797 "w_mbytes_per_sec": 0 00:24:41.797 }, 00:24:41.797 "claimed": false, 00:24:41.797 "zoned": false, 00:24:41.797 "supported_io_types": { 00:24:41.797 "read": true, 00:24:41.797 "write": true, 00:24:41.797 "unmap": true, 00:24:41.797 "flush": false, 00:24:41.797 "reset": true, 00:24:41.797 "nvme_admin": false, 00:24:41.797 "nvme_io": false, 00:24:41.797 "nvme_io_md": false, 00:24:41.797 "write_zeroes": true, 00:24:41.797 "zcopy": false, 00:24:41.797 "get_zone_info": false, 00:24:41.797 "zone_management": false, 00:24:41.797 "zone_append": false, 00:24:41.797 "compare": false, 00:24:41.797 "compare_and_write": false, 00:24:41.797 "abort": false, 00:24:41.797 "seek_hole": true, 00:24:41.797 "seek_data": true, 00:24:41.797 "copy": false, 00:24:41.797 "nvme_iov_md": false 00:24:41.797 }, 00:24:41.797 "driver_specific": { 00:24:41.797 "lvol": { 00:24:41.797 "lvol_store_uuid": "14594a72-31c6-4008-8481-4d20a12ccdec", 00:24:41.797 "base_bdev": "nvme0n1", 00:24:41.797 "thin_provision": true, 00:24:41.797 "num_allocated_clusters": 0, 00:24:41.797 "snapshot": false, 00:24:41.797 "clone": false, 00:24:41.797 "esnap_clone": false 00:24:41.797 } 00:24:41.797 } 00:24:41.797 } 00:24:41.797 ]' 00:24:41.797 18:26:16 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:41.797 18:26:16 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:24:41.797 18:26:16 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:42.056 18:26:16 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:42.056 18:26:16 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:42.056 18:26:16 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:24:42.056 18:26:16 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:24:42.056 18:26:16 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:24:42.056 18:26:16 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:42.314 18:26:16 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:42.314 18:26:16 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:42.314 18:26:16 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size df73143c-123f-4ead-ba5b-fb7ca38e1516 00:24:42.314 18:26:16 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=df73143c-123f-4ead-ba5b-fb7ca38e1516 00:24:42.314 18:26:16 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:42.314 18:26:16 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:24:42.314 18:26:16 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:24:42.314 18:26:16 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b df73143c-123f-4ead-ba5b-fb7ca38e1516 00:24:42.573 18:26:16 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:42.573 { 00:24:42.573 "name": "df73143c-123f-4ead-ba5b-fb7ca38e1516", 00:24:42.573 "aliases": [ 00:24:42.573 "lvs/nvme0n1p0" 00:24:42.573 ], 00:24:42.573 "product_name": "Logical Volume", 00:24:42.573 "block_size": 4096, 00:24:42.573 "num_blocks": 26476544, 00:24:42.573 "uuid": "df73143c-123f-4ead-ba5b-fb7ca38e1516", 00:24:42.573 "assigned_rate_limits": { 00:24:42.573 "rw_ios_per_sec": 0, 00:24:42.573 "rw_mbytes_per_sec": 0, 00:24:42.573 "r_mbytes_per_sec": 0, 00:24:42.573 "w_mbytes_per_sec": 0 00:24:42.573 }, 00:24:42.573 "claimed": false, 00:24:42.573 "zoned": false, 00:24:42.573 "supported_io_types": { 00:24:42.573 "read": true, 00:24:42.573 "write": true, 00:24:42.573 "unmap": true, 00:24:42.573 "flush": false, 00:24:42.573 "reset": true, 00:24:42.573 "nvme_admin": false, 00:24:42.573 "nvme_io": false, 00:24:42.573 "nvme_io_md": false, 00:24:42.573 "write_zeroes": true, 00:24:42.573 "zcopy": false, 00:24:42.573 "get_zone_info": false, 00:24:42.573 "zone_management": false, 00:24:42.573 "zone_append": false, 00:24:42.573 "compare": false, 00:24:42.573 "compare_and_write": false, 00:24:42.573 "abort": false, 00:24:42.573 "seek_hole": true, 00:24:42.573 "seek_data": true, 00:24:42.573 "copy": false, 00:24:42.573 "nvme_iov_md": false 00:24:42.573 }, 00:24:42.573 "driver_specific": { 00:24:42.573 "lvol": { 00:24:42.573 "lvol_store_uuid": "14594a72-31c6-4008-8481-4d20a12ccdec", 00:24:42.573 "base_bdev": "nvme0n1", 00:24:42.573 "thin_provision": true, 00:24:42.573 "num_allocated_clusters": 0, 00:24:42.573 "snapshot": false, 00:24:42.573 "clone": false, 00:24:42.573 "esnap_clone": false 00:24:42.573 } 00:24:42.573 } 00:24:42.573 } 00:24:42.573 ]' 00:24:42.573 18:26:16 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
00:24:42.573 18:26:16 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:24:42.573 18:26:16 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:42.573 18:26:16 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:42.573 18:26:16 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:42.573 18:26:16 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:24:42.573 18:26:16 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:24:42.573 18:26:16 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:42.831 18:26:17 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:24:42.832 18:26:17 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size df73143c-123f-4ead-ba5b-fb7ca38e1516 00:24:42.832 18:26:17 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=df73143c-123f-4ead-ba5b-fb7ca38e1516 00:24:42.832 18:26:17 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:42.832 18:26:17 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:24:42.832 18:26:17 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:24:42.832 18:26:17 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b df73143c-123f-4ead-ba5b-fb7ca38e1516 00:24:43.090 18:26:17 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:43.090 { 00:24:43.090 "name": "df73143c-123f-4ead-ba5b-fb7ca38e1516", 00:24:43.090 "aliases": [ 00:24:43.090 "lvs/nvme0n1p0" 00:24:43.090 ], 00:24:43.090 "product_name": "Logical Volume", 00:24:43.090 "block_size": 4096, 00:24:43.090 "num_blocks": 26476544, 00:24:43.090 "uuid": "df73143c-123f-4ead-ba5b-fb7ca38e1516", 00:24:43.090 "assigned_rate_limits": { 00:24:43.090 "rw_ios_per_sec": 0, 00:24:43.090 "rw_mbytes_per_sec": 0, 00:24:43.090 "r_mbytes_per_sec": 0, 00:24:43.090 "w_mbytes_per_sec": 0 00:24:43.090 }, 00:24:43.090 "claimed": false, 00:24:43.090 "zoned": false, 00:24:43.090 "supported_io_types": { 00:24:43.090 "read": true, 00:24:43.090 "write": true, 00:24:43.090 "unmap": true, 00:24:43.090 "flush": false, 00:24:43.090 "reset": true, 00:24:43.090 "nvme_admin": false, 00:24:43.090 "nvme_io": false, 00:24:43.090 "nvme_io_md": false, 00:24:43.090 "write_zeroes": true, 00:24:43.090 "zcopy": false, 00:24:43.090 "get_zone_info": false, 00:24:43.090 "zone_management": false, 00:24:43.090 "zone_append": false, 00:24:43.090 "compare": false, 00:24:43.090 "compare_and_write": false, 00:24:43.090 "abort": false, 00:24:43.090 "seek_hole": true, 00:24:43.090 "seek_data": true, 00:24:43.090 "copy": false, 00:24:43.090 "nvme_iov_md": false 00:24:43.090 }, 00:24:43.090 "driver_specific": { 00:24:43.090 "lvol": { 00:24:43.090 "lvol_store_uuid": "14594a72-31c6-4008-8481-4d20a12ccdec", 00:24:43.090 "base_bdev": "nvme0n1", 00:24:43.090 "thin_provision": true, 00:24:43.090 "num_allocated_clusters": 0, 00:24:43.090 "snapshot": false, 00:24:43.090 "clone": false, 00:24:43.090 "esnap_clone": false 00:24:43.090 } 00:24:43.090 } 00:24:43.090 } 00:24:43.090 ]' 00:24:43.091 18:26:17 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:43.349 18:26:17 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:24:43.349 18:26:17 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:43.349 18:26:17 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:24:43.349 18:26:17 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:43.349 18:26:17 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:24:43.349 18:26:17 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:24:43.349 18:26:17 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d df73143c-123f-4ead-ba5b-fb7ca38e1516 --l2p_dram_limit 10' 00:24:43.349 18:26:17 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:24:43.349 18:26:17 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:24:43.349 18:26:17 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:43.349 18:26:17 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:24:43.349 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:24:43.349 18:26:17 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d df73143c-123f-4ead-ba5b-fb7ca38e1516 --l2p_dram_limit 10 -c nvc0n1p0 00:24:43.609 [2024-11-26 18:26:17.852925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.609 [2024-11-26 18:26:17.853022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:43.609 [2024-11-26 18:26:17.853050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:43.609 [2024-11-26 18:26:17.853063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.609 [2024-11-26 18:26:17.853156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.609 [2024-11-26 18:26:17.853178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:43.609 [2024-11-26 18:26:17.853194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:24:43.609 [2024-11-26 18:26:17.853206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.609 [2024-11-26 18:26:17.853241] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:43.609 [2024-11-26 18:26:17.854338] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:43.609 [2024-11-26 18:26:17.854377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.609 [2024-11-26 18:26:17.854390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:43.609 [2024-11-26 18:26:17.854404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.140 ms 00:24:43.609 [2024-11-26 18:26:17.854415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.609 [2024-11-26 18:26:17.854647] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c808b541-5662-42d3-a4fb-479cccb27fb1 00:24:43.609 [2024-11-26 18:26:17.856723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.609 [2024-11-26 18:26:17.856763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:43.609 [2024-11-26 18:26:17.856778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:24:43.609 [2024-11-26 18:26:17.856793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.609 [2024-11-26 18:26:17.867139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.609 [2024-11-26 
18:26:17.867205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:43.609 [2024-11-26 18:26:17.867221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.276 ms 00:24:43.609 [2024-11-26 18:26:17.867234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.609 [2024-11-26 18:26:17.867347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.609 [2024-11-26 18:26:17.867369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:43.609 [2024-11-26 18:26:17.867381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:24:43.609 [2024-11-26 18:26:17.867398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.609 [2024-11-26 18:26:17.867465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.609 [2024-11-26 18:26:17.867485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:43.609 [2024-11-26 18:26:17.867499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:43.609 [2024-11-26 18:26:17.867511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.609 [2024-11-26 18:26:17.867540] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:43.609 [2024-11-26 18:26:17.872360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.609 [2024-11-26 18:26:17.872584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:43.609 [2024-11-26 18:26:17.872619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.824 ms 00:24:43.609 [2024-11-26 18:26:17.872632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.609 [2024-11-26 18:26:17.872683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.609 [2024-11-26 18:26:17.872699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:43.609 [2024-11-26 18:26:17.872713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:43.609 [2024-11-26 18:26:17.872723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.609 [2024-11-26 18:26:17.872770] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:43.609 [2024-11-26 18:26:17.872930] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:43.609 [2024-11-26 18:26:17.872968] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:43.609 [2024-11-26 18:26:17.872982] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:43.609 [2024-11-26 18:26:17.872997] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:43.609 [2024-11-26 18:26:17.873009] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:43.609 [2024-11-26 18:26:17.873023] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:43.609 [2024-11-26 18:26:17.873035] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:43.609 [2024-11-26 18:26:17.873048] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:43.609 [2024-11-26 18:26:17.873057] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:43.609 [2024-11-26 18:26:17.873071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.609 [2024-11-26 18:26:17.873108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:43.609 [2024-11-26 18:26:17.873122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:24:43.609 [2024-11-26 18:26:17.873132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.609 [2024-11-26 18:26:17.873217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.609 [2024-11-26 18:26:17.873231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:43.609 [2024-11-26 18:26:17.873244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:43.609 [2024-11-26 18:26:17.873254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.609 [2024-11-26 18:26:17.873361] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:43.609 [2024-11-26 18:26:17.873377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:43.609 [2024-11-26 18:26:17.873390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:43.609 [2024-11-26 18:26:17.873401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.609 [2024-11-26 18:26:17.873413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:43.609 [2024-11-26 18:26:17.873422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:43.609 [2024-11-26 18:26:17.873433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:43.609 [2024-11-26 18:26:17.873442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:43.609 [2024-11-26 18:26:17.873454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:43.609 [2024-11-26 18:26:17.873463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:43.609 [2024-11-26 18:26:17.873474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:43.609 [2024-11-26 18:26:17.873483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:43.609 [2024-11-26 18:26:17.873495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:43.609 [2024-11-26 18:26:17.873504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:43.609 [2024-11-26 18:26:17.873515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:43.609 [2024-11-26 18:26:17.873525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.609 [2024-11-26 18:26:17.873539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:43.609 [2024-11-26 18:26:17.873549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:43.609 [2024-11-26 18:26:17.873562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.609 [2024-11-26 18:26:17.873571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:43.609 [2024-11-26 18:26:17.873583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:43.609 [2024-11-26 18:26:17.873592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:43.609 [2024-11-26 18:26:17.873603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:43.610 
[2024-11-26 18:26:17.873632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:43.610 [2024-11-26 18:26:17.873664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:43.610 [2024-11-26 18:26:17.873674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:43.610 [2024-11-26 18:26:17.873686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:43.610 [2024-11-26 18:26:17.873695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:43.610 [2024-11-26 18:26:17.873707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:43.610 [2024-11-26 18:26:17.873716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:43.610 [2024-11-26 18:26:17.873728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:43.610 [2024-11-26 18:26:17.873752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:43.610 [2024-11-26 18:26:17.873767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:43.610 [2024-11-26 18:26:17.873777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:43.610 [2024-11-26 18:26:17.873788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:43.610 [2024-11-26 18:26:17.873798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:43.610 [2024-11-26 18:26:17.873810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:43.610 [2024-11-26 18:26:17.873820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:43.610 [2024-11-26 18:26:17.873831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:43.610 [2024-11-26 18:26:17.873841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.610 [2024-11-26 18:26:17.873852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:43.610 [2024-11-26 18:26:17.873862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:43.610 [2024-11-26 18:26:17.873873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.610 [2024-11-26 18:26:17.873882] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:43.610 [2024-11-26 18:26:17.873894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:43.610 [2024-11-26 18:26:17.873904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:43.610 [2024-11-26 18:26:17.873918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:43.610 [2024-11-26 18:26:17.873931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:43.610 [2024-11-26 18:26:17.873961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:43.610 [2024-11-26 18:26:17.873971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:43.610 [2024-11-26 18:26:17.873984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:43.610 [2024-11-26 18:26:17.873994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:43.610 [2024-11-26 18:26:17.874022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:43.610 [2024-11-26 18:26:17.874037] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:43.610 [2024-11-26 
18:26:17.874070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:43.610 [2024-11-26 18:26:17.874083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:43.610 [2024-11-26 18:26:17.874096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:43.610 [2024-11-26 18:26:17.874107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:43.610 [2024-11-26 18:26:17.874120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:43.610 [2024-11-26 18:26:17.874130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:43.610 [2024-11-26 18:26:17.874143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:43.610 [2024-11-26 18:26:17.874154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:43.610 [2024-11-26 18:26:17.874167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:43.610 [2024-11-26 18:26:17.874179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:43.610 [2024-11-26 18:26:17.874194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:43.610 [2024-11-26 18:26:17.874205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:43.610 [2024-11-26 18:26:17.874218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:43.610 [2024-11-26 18:26:17.874229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:43.610 [2024-11-26 18:26:17.874243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:43.610 [2024-11-26 18:26:17.874254] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:43.610 [2024-11-26 18:26:17.874268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:43.610 [2024-11-26 18:26:17.874280] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:43.610 [2024-11-26 18:26:17.874293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:43.610 [2024-11-26 18:26:17.874304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:43.610 [2024-11-26 18:26:17.874317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:43.610 [2024-11-26 18:26:17.874329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:43.610 [2024-11-26 18:26:17.874342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:43.610 [2024-11-26 18:26:17.874352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.029 ms 00:24:43.610 [2024-11-26 18:26:17.874365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:43.610 [2024-11-26 18:26:17.874418] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:43.610 [2024-11-26 18:26:17.874446] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:46.916 [2024-11-26 18:26:20.620153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.916 [2024-11-26 18:26:20.620280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:46.916 [2024-11-26 18:26:20.620302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2745.731 ms 00:24:46.916 [2024-11-26 18:26:20.620320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.916 [2024-11-26 18:26:20.666297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.916 [2024-11-26 18:26:20.666376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:46.916 [2024-11-26 18:26:20.666396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.608 ms 00:24:46.916 [2024-11-26 18:26:20.666411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.916 [2024-11-26 18:26:20.666741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.916 [2024-11-26 18:26:20.666774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:46.916 [2024-11-26 18:26:20.666791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:24:46.916 [2024-11-26 18:26:20.666814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.916 [2024-11-26 18:26:20.712231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.916 [2024-11-26 18:26:20.712295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:46.916 [2024-11-26 18:26:20.712315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.237 ms 00:24:46.916 [2024-11-26 18:26:20.712331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.916 [2024-11-26 18:26:20.712399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.916 [2024-11-26 18:26:20.712418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:46.916 [2024-11-26 18:26:20.712430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:46.916 [2024-11-26 18:26:20.712456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.916 [2024-11-26 18:26:20.713151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.916 [2024-11-26 18:26:20.713180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:46.916 [2024-11-26 18:26:20.713194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:24:46.916 [2024-11-26 18:26:20.713207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.916 
[2024-11-26 18:26:20.713364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.916 [2024-11-26 18:26:20.713386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:46.916 [2024-11-26 18:26:20.713399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:24:46.916 [2024-11-26 18:26:20.713415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.916 [2024-11-26 18:26:20.736367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.916 [2024-11-26 18:26:20.736419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:46.916 [2024-11-26 18:26:20.736437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.926 ms 00:24:46.916 [2024-11-26 18:26:20.736452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.916 [2024-11-26 18:26:20.761692] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:46.916 [2024-11-26 18:26:20.767550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.916 [2024-11-26 18:26:20.767603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:46.916 [2024-11-26 18:26:20.767645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.901 ms 00:24:46.916 [2024-11-26 18:26:20.767658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.916 [2024-11-26 18:26:20.845827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.916 [2024-11-26 18:26:20.845884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:46.916 [2024-11-26 18:26:20.845906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.116 ms 00:24:46.916 [2024-11-26 18:26:20.845919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.916 [2024-11-26 18:26:20.846158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.916 [2024-11-26 18:26:20.846177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:46.916 [2024-11-26 18:26:20.846198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:24:46.916 [2024-11-26 18:26:20.846208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.916 [2024-11-26 18:26:20.874823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.916 [2024-11-26 18:26:20.875128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:46.916 [2024-11-26 18:26:20.875167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.546 ms 00:24:46.917 [2024-11-26 18:26:20.875182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.917 [2024-11-26 18:26:20.906362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.917 [2024-11-26 18:26:20.906406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:46.917 [2024-11-26 18:26:20.906427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.114 ms 00:24:46.917 [2024-11-26 18:26:20.906439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.917 [2024-11-26 18:26:20.907463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.917 [2024-11-26 18:26:20.907697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:46.917 
[2024-11-26 18:26:20.907737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:24:46.917 [2024-11-26 18:26:20.907751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.917 [2024-11-26 18:26:20.992946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.917 [2024-11-26 18:26:20.993002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:46.917 [2024-11-26 18:26:20.993029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.100 ms 00:24:46.917 [2024-11-26 18:26:20.993041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.917 [2024-11-26 18:26:21.023955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.917 [2024-11-26 18:26:21.023999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:46.917 [2024-11-26 18:26:21.024025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.808 ms 00:24:46.917 [2024-11-26 18:26:21.024037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.917 [2024-11-26 18:26:21.053122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.917 [2024-11-26 18:26:21.053165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:46.917 [2024-11-26 18:26:21.053192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.021 ms 00:24:46.917 [2024-11-26 18:26:21.053203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.917 [2024-11-26 18:26:21.081431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.917 [2024-11-26 18:26:21.081471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:46.917 [2024-11-26 18:26:21.081491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.174 ms 00:24:46.917 [2024-11-26 18:26:21.081501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.917 [2024-11-26 18:26:21.081626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.917 [2024-11-26 18:26:21.081646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:46.917 [2024-11-26 18:26:21.081666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:46.917 [2024-11-26 18:26:21.081677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.917 [2024-11-26 18:26:21.081841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.917 [2024-11-26 18:26:21.081864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:46.917 [2024-11-26 18:26:21.081894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:24:46.917 [2024-11-26 18:26:21.081904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.917 [2024-11-26 18:26:21.083397] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3229.924 ms, result 0 00:24:46.917 { 00:24:46.917 "name": "ftl0", 00:24:46.917 "uuid": "c808b541-5662-42d3-a4fb-479cccb27fb1" 00:24:46.917 } 00:24:46.917 18:26:21 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:24:46.917 18:26:21 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:47.176 18:26:21 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:24:47.176 18:26:21 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:47.176 [2024-11-26 18:26:21.614460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.176 [2024-11-26 18:26:21.614638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:47.176 [2024-11-26 18:26:21.614672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:47.176 [2024-11-26 18:26:21.614699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.176 [2024-11-26 18:26:21.614743] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:47.176 [2024-11-26 18:26:21.618719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.176 [2024-11-26 18:26:21.619007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:47.176 [2024-11-26 18:26:21.620248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.943 ms 00:24:47.176 [2024-11-26 18:26:21.620262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.176 [2024-11-26 18:26:21.620702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.176 [2024-11-26 18:26:21.620725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:47.176 [2024-11-26 18:26:21.620742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:24:47.176 [2024-11-26 18:26:21.620755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.176 [2024-11-26 18:26:21.623908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.176 [2024-11-26 18:26:21.623973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:47.176 [2024-11-26 18:26:21.623989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.109 ms 00:24:47.176 [2024-11-26 18:26:21.624000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.176 [2024-11-26 18:26:21.630224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.176 [2024-11-26 18:26:21.630259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:47.176 [2024-11-26 18:26:21.630292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.177 ms 00:24:47.176 [2024-11-26 18:26:21.630303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.437 [2024-11-26 18:26:21.661628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.437 [2024-11-26 18:26:21.661694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:47.437 [2024-11-26 18:26:21.661719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.218 ms 00:24:47.437 [2024-11-26 18:26:21.661731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.437 [2024-11-26 18:26:21.681153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.437 [2024-11-26 18:26:21.681192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:47.437 [2024-11-26 18:26:21.681212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.361 ms 00:24:47.437 [2024-11-26 18:26:21.681224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.437 [2024-11-26 18:26:21.681392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.438 [2024-11-26 18:26:21.681411] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:47.438 [2024-11-26 18:26:21.681427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:24:47.438 [2024-11-26 18:26:21.681439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.438 [2024-11-26 18:26:21.712289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.438 [2024-11-26 18:26:21.712342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:47.438 [2024-11-26 18:26:21.712366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.814 ms 00:24:47.438 [2024-11-26 18:26:21.712378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.438 [2024-11-26 18:26:21.743380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.438 [2024-11-26 18:26:21.743720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:47.438 [2024-11-26 18:26:21.743760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.928 ms 00:24:47.438 [2024-11-26 18:26:21.743775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.438 [2024-11-26 18:26:21.775161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.438 [2024-11-26 18:26:21.775494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:47.438 [2024-11-26 18:26:21.775535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.290 ms 00:24:47.438 [2024-11-26 18:26:21.775549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.438 [2024-11-26 18:26:21.807812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.438 [2024-11-26 18:26:21.807893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:47.438 [2024-11-26 18:26:21.807923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.021 ms 00:24:47.438 [2024-11-26 18:26:21.807935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.438 [2024-11-26 18:26:21.808052] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:47.438 [2024-11-26 18:26:21.808082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808224] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 
[2024-11-26 18:26:21.808627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.808991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.809003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:24:47.438 [2024-11-26 18:26:21.809018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.809030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.809045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.809058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.809073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.809085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.809100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.809114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.809131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.809144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.809161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.809174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:47.438 [2024-11-26 18:26:21.809192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:47.439 [2024-11-26 18:26:21.809637] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:47.439 [2024-11-26 18:26:21.809653] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c808b541-5662-42d3-a4fb-479cccb27fb1 00:24:47.439 [2024-11-26 18:26:21.809665] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:47.439 [2024-11-26 18:26:21.809683] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:47.439 [2024-11-26 18:26:21.809698] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:47.439 [2024-11-26 18:26:21.809712] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:47.439 [2024-11-26 18:26:21.809724] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:47.439 [2024-11-26 18:26:21.809738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:47.439 [2024-11-26 18:26:21.809749] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:47.439 [2024-11-26 18:26:21.809763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:47.439 [2024-11-26 18:26:21.809773] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:24:47.439 [2024-11-26 18:26:21.809788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.439 [2024-11-26 18:26:21.809799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:47.439 [2024-11-26 18:26:21.809816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.740 ms 00:24:47.439 [2024-11-26 18:26:21.809832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.439 [2024-11-26 18:26:21.828190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.439 [2024-11-26 18:26:21.828255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:47.439 [2024-11-26 18:26:21.828279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.249 ms 00:24:47.439 [2024-11-26 18:26:21.828292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.439 [2024-11-26 18:26:21.828881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.439 [2024-11-26 18:26:21.828906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:47.439 [2024-11-26 18:26:21.828930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:24:47.439 [2024-11-26 18:26:21.828942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.439 [2024-11-26 18:26:21.889074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.439 [2024-11-26 18:26:21.889155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:47.439 [2024-11-26 18:26:21.889181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.439 [2024-11-26 18:26:21.889194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.439 [2024-11-26 18:26:21.889314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.439 [2024-11-26 18:26:21.889331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:47.439 [2024-11-26 18:26:21.889352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.439 [2024-11-26 18:26:21.889365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.439 [2024-11-26 18:26:21.889526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.439 [2024-11-26 18:26:21.889548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:47.439 [2024-11-26 18:26:21.889601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.439 [2024-11-26 18:26:21.889614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.439 [2024-11-26 18:26:21.889670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.439 [2024-11-26 18:26:21.889695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:47.439 [2024-11-26 18:26:21.889711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.439 [2024-11-26 18:26:21.889727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.699 [2024-11-26 18:26:22.006419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.699 [2024-11-26 18:26:22.006504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:47.699 [2024-11-26 18:26:22.006586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:24:47.699 [2024-11-26 18:26:22.006611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.699 [2024-11-26 18:26:22.100302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.699 [2024-11-26 18:26:22.100392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:47.699 [2024-11-26 18:26:22.100420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.699 [2024-11-26 18:26:22.100434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.699 [2024-11-26 18:26:22.100645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.699 [2024-11-26 18:26:22.100675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:47.699 [2024-11-26 18:26:22.100693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.699 [2024-11-26 18:26:22.100705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.699 [2024-11-26 18:26:22.100793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.699 [2024-11-26 18:26:22.100812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:47.699 [2024-11-26 18:26:22.100829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.699 [2024-11-26 18:26:22.100842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.699 [2024-11-26 18:26:22.100993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.699 [2024-11-26 18:26:22.101014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:47.699 [2024-11-26 18:26:22.101030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.699 [2024-11-26 18:26:22.101042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.699 [2024-11-26 18:26:22.101142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.699 [2024-11-26 18:26:22.101165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:47.699 [2024-11-26 18:26:22.101184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.699 [2024-11-26 18:26:22.101197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.699 [2024-11-26 18:26:22.101283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.699 [2024-11-26 18:26:22.101305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:47.699 [2024-11-26 18:26:22.101322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.699 [2024-11-26 18:26:22.101335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.699 [2024-11-26 18:26:22.101416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.699 [2024-11-26 18:26:22.101434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:47.699 [2024-11-26 18:26:22.101451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.699 [2024-11-26 18:26:22.101463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.699 [2024-11-26 18:26:22.101704] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 487.179 ms, result 0 00:24:47.699 true 00:24:47.699 18:26:22 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79248 
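For reference, the killprocess helper invoked above is expanded by bash xtrace in the lines that follow (common/autotest_common.sh@954 through @978). Reconstructed from that trace alone (the real helper in test/common/autotest_common.sh may differ in detail), it behaves roughly like:

killprocess() {
    # @954: refuse an empty pid
    [ -z "$1" ] && return 1
    # @958: probe that the target process is still alive
    kill -0 "$1" || return
    # @959/@960: on Linux, resolve the process name for the sudo check
    [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$1")
    # @964: signal directly only when the target is not a bare sudo;
    # here it resolves to reactor_0 (the SPDK app), so this branch is taken
    if [ "$process_name" != sudo ]; then
        echo "killing process with pid $1"   # @972
        kill "$1"                            # @973
    fi
    wait "$1"                                # @978
}

In the trace below this runs against pid 79248: the -z and kill -0 checks pass, ps resolves reactor_0, the sudo comparison fails, and the process is killed and reaped.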
00:24:47.699 18:26:22 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79248 ']' 00:24:47.699 18:26:22 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79248 00:24:47.699 18:26:22 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:24:47.699 18:26:22 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:47.699 18:26:22 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79248 00:24:47.958 18:26:22 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:47.958 18:26:22 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:47.958 18:26:22 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79248' 00:24:47.958 killing process with pid 79248 00:24:47.958 18:26:22 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79248 00:24:47.958 18:26:22 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79248 00:24:53.222 18:26:27 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:24:57.411 262144+0 records in 00:24:57.411 262144+0 records out 00:24:57.411 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.41925 s, 243 MB/s 00:24:57.411 18:26:31 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:59.338 18:26:33 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:59.338 [2024-11-26 18:26:33.422556] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:24:59.338 [2024-11-26 18:26:33.422762] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79485 ] 00:24:59.338 [2024-11-26 18:26:33.610755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.338 [2024-11-26 18:26:33.758505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.906 [2024-11-26 18:26:34.087590] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:59.906 [2024-11-26 18:26:34.087685] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:59.906 [2024-11-26 18:26:34.257595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.907 [2024-11-26 18:26:34.257646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:59.907 [2024-11-26 18:26:34.257683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:59.907 [2024-11-26 18:26:34.257693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.907 [2024-11-26 18:26:34.257755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.907 [2024-11-26 18:26:34.257777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:59.907 [2024-11-26 18:26:34.257788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:59.907 [2024-11-26 18:26:34.257798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.907 [2024-11-26 18:26:34.257825] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:24:59.907 [2024-11-26 18:26:34.258631] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:59.907 [2024-11-26 18:26:34.258699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.907 [2024-11-26 18:26:34.258712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:59.907 [2024-11-26 18:26:34.258725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.880 ms 00:24:59.907 [2024-11-26 18:26:34.258736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.907 [2024-11-26 18:26:34.260699] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:59.907 [2024-11-26 18:26:34.275905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.907 [2024-11-26 18:26:34.275961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:59.907 [2024-11-26 18:26:34.275993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.207 ms 00:24:59.907 [2024-11-26 18:26:34.276005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.907 [2024-11-26 18:26:34.276085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.907 [2024-11-26 18:26:34.276104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:59.907 [2024-11-26 18:26:34.276115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:59.907 [2024-11-26 18:26:34.276125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.907 [2024-11-26 18:26:34.285077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.907 [2024-11-26 18:26:34.285117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:59.907 [2024-11-26 18:26:34.285148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.860 ms 00:24:59.907 [2024-11-26 18:26:34.285173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.907 [2024-11-26 18:26:34.285279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.907 [2024-11-26 18:26:34.285312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:59.907 [2024-11-26 18:26:34.285323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:24:59.907 [2024-11-26 18:26:34.285333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.907 [2024-11-26 18:26:34.285386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.907 [2024-11-26 18:26:34.285402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:59.907 [2024-11-26 18:26:34.285413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:59.907 [2024-11-26 18:26:34.285424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.907 [2024-11-26 18:26:34.285468] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:59.907 [2024-11-26 18:26:34.290025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.907 [2024-11-26 18:26:34.290058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:59.907 [2024-11-26 18:26:34.290097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.564 ms 00:24:59.907 [2024-11-26 18:26:34.290111] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.907 [2024-11-26 18:26:34.290144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.907 [2024-11-26 18:26:34.290158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:59.907 [2024-11-26 18:26:34.290169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:59.907 [2024-11-26 18:26:34.290179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.907 [2024-11-26 18:26:34.290238] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:59.907 [2024-11-26 18:26:34.290267] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:59.907 [2024-11-26 18:26:34.290321] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:59.907 [2024-11-26 18:26:34.290344] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:59.907 [2024-11-26 18:26:34.290437] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:59.907 [2024-11-26 18:26:34.290452] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:59.907 [2024-11-26 18:26:34.290465] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:59.907 [2024-11-26 18:26:34.290478] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:59.907 [2024-11-26 18:26:34.290489] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:59.907 [2024-11-26 18:26:34.290500] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:59.907 [2024-11-26 18:26:34.290510] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:59.907 [2024-11-26 18:26:34.290549] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:59.907 [2024-11-26 18:26:34.290595] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:59.907 [2024-11-26 18:26:34.290611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.907 [2024-11-26 18:26:34.290623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:59.907 [2024-11-26 18:26:34.290634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:24:59.907 [2024-11-26 18:26:34.290645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.907 [2024-11-26 18:26:34.290747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.907 [2024-11-26 18:26:34.290763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:59.907 [2024-11-26 18:26:34.290774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:24:59.907 [2024-11-26 18:26:34.290784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.907 [2024-11-26 18:26:34.290949] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:59.907 [2024-11-26 18:26:34.290982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:59.907 [2024-11-26 18:26:34.290994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:24:59.907 [2024-11-26 18:26:34.291004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:59.907 [2024-11-26 18:26:34.291015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:59.907 [2024-11-26 18:26:34.291024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:59.907 [2024-11-26 18:26:34.291033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:59.907 [2024-11-26 18:26:34.291043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:59.907 [2024-11-26 18:26:34.291052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:59.907 [2024-11-26 18:26:34.291060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:59.907 [2024-11-26 18:26:34.291085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:59.907 [2024-11-26 18:26:34.291094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:59.907 [2024-11-26 18:26:34.291103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:59.907 [2024-11-26 18:26:34.291126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:59.907 [2024-11-26 18:26:34.291137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:59.907 [2024-11-26 18:26:34.291147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:59.907 [2024-11-26 18:26:34.291158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:59.907 [2024-11-26 18:26:34.291168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:59.907 [2024-11-26 18:26:34.291178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:59.907 [2024-11-26 18:26:34.291188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:59.907 [2024-11-26 18:26:34.291198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:59.907 [2024-11-26 18:26:34.291208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:59.907 [2024-11-26 18:26:34.291218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:59.908 [2024-11-26 18:26:34.291228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:59.908 [2024-11-26 18:26:34.291237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:59.908 [2024-11-26 18:26:34.291247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:59.908 [2024-11-26 18:26:34.291257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:59.908 [2024-11-26 18:26:34.291266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:59.908 [2024-11-26 18:26:34.291276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:59.908 [2024-11-26 18:26:34.291286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:59.908 [2024-11-26 18:26:34.291295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:59.908 [2024-11-26 18:26:34.291305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:59.908 [2024-11-26 18:26:34.291314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:59.908 [2024-11-26 18:26:34.291324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:59.908 [2024-11-26 18:26:34.291333] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:24:59.908 [2024-11-26 18:26:34.291344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:59.908 [2024-11-26 18:26:34.291353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:59.908 [2024-11-26 18:26:34.291363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:59.908 [2024-11-26 18:26:34.291373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:59.908 [2024-11-26 18:26:34.291382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:59.908 [2024-11-26 18:26:34.291392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:59.908 [2024-11-26 18:26:34.291401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:59.908 [2024-11-26 18:26:34.291426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:59.908 [2024-11-26 18:26:34.291435] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:59.908 [2024-11-26 18:26:34.291446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:59.908 [2024-11-26 18:26:34.291460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:59.908 [2024-11-26 18:26:34.291470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:59.908 [2024-11-26 18:26:34.291482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:59.908 [2024-11-26 18:26:34.291492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:59.908 [2024-11-26 18:26:34.291502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:59.908 [2024-11-26 18:26:34.291513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:59.908 [2024-11-26 18:26:34.291522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:59.908 [2024-11-26 18:26:34.291532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:59.908 [2024-11-26 18:26:34.291543] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:59.908 [2024-11-26 18:26:34.291556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:59.908 [2024-11-26 18:26:34.291572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:59.908 [2024-11-26 18:26:34.291583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:59.908 [2024-11-26 18:26:34.291593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:59.908 [2024-11-26 18:26:34.291604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:59.908 [2024-11-26 18:26:34.291614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:59.908 [2024-11-26 18:26:34.291624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:59.908 [2024-11-26 18:26:34.291634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:59.908 [2024-11-26 18:26:34.291644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:59.908 [2024-11-26 18:26:34.291670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:59.908 [2024-11-26 18:26:34.291681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:59.908 [2024-11-26 18:26:34.291692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:59.908 [2024-11-26 18:26:34.291703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:59.908 [2024-11-26 18:26:34.291713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:59.908 [2024-11-26 18:26:34.291724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:59.908 [2024-11-26 18:26:34.291734] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:59.908 [2024-11-26 18:26:34.291746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:59.908 [2024-11-26 18:26:34.291757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:59.908 [2024-11-26 18:26:34.291768] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:59.908 [2024-11-26 18:26:34.291779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:59.908 [2024-11-26 18:26:34.291789] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:59.908 [2024-11-26 18:26:34.291801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.908 [2024-11-26 18:26:34.291812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:59.908 [2024-11-26 18:26:34.291824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.965 ms 00:24:59.908 [2024-11-26 18:26:34.291835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.908 [2024-11-26 18:26:34.328869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.908 [2024-11-26 18:26:34.328929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:59.908 [2024-11-26 18:26:34.328964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.973 ms 00:24:59.908 [2024-11-26 18:26:34.328980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:59.908 [2024-11-26 18:26:34.329080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:59.908 [2024-11-26 18:26:34.329094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:59.908 [2024-11-26 18:26:34.329105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.058 ms 00:24:59.908 [2024-11-26 18:26:34.329115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.167 [2024-11-26 18:26:34.379472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.167 [2024-11-26 18:26:34.379525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:00.167 [2024-11-26 18:26:34.379558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.273 ms 00:25:00.167 [2024-11-26 18:26:34.379583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.167 [2024-11-26 18:26:34.379659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.167 [2024-11-26 18:26:34.379674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:00.167 [2024-11-26 18:26:34.379714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:00.167 [2024-11-26 18:26:34.379725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.167 [2024-11-26 18:26:34.380437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.167 [2024-11-26 18:26:34.380476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:00.167 [2024-11-26 18:26:34.380488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:25:00.167 [2024-11-26 18:26:34.380499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.167 [2024-11-26 18:26:34.380689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.167 [2024-11-26 18:26:34.380709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:00.167 [2024-11-26 18:26:34.380732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:25:00.167 [2024-11-26 18:26:34.380742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.167 [2024-11-26 18:26:34.399497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.167 [2024-11-26 18:26:34.399543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:00.168 [2024-11-26 18:26:34.399590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.729 ms 00:25:00.168 [2024-11-26 18:26:34.399604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.168 [2024-11-26 18:26:34.414149] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:00.168 [2024-11-26 18:26:34.414353] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:00.168 [2024-11-26 18:26:34.414377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.168 [2024-11-26 18:26:34.414389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:00.168 [2024-11-26 18:26:34.414401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.620 ms 00:25:00.168 [2024-11-26 18:26:34.414413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.168 [2024-11-26 18:26:34.439200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.168 [2024-11-26 18:26:34.439252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:00.168 [2024-11-26 18:26:34.439284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.745 ms 00:25:00.168 [2024-11-26 18:26:34.439295] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.168 [2024-11-26 18:26:34.452594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.168 [2024-11-26 18:26:34.452639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:00.168 [2024-11-26 18:26:34.452670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.256 ms 00:25:00.168 [2024-11-26 18:26:34.452681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.168 [2024-11-26 18:26:34.465443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.168 [2024-11-26 18:26:34.465481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:00.168 [2024-11-26 18:26:34.465512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.723 ms 00:25:00.168 [2024-11-26 18:26:34.465523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.168 [2024-11-26 18:26:34.466385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.168 [2024-11-26 18:26:34.466420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:00.168 [2024-11-26 18:26:34.466435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:25:00.168 [2024-11-26 18:26:34.466450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.168 [2024-11-26 18:26:34.534008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.168 [2024-11-26 18:26:34.534091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:00.168 [2024-11-26 18:26:34.534145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.533 ms 00:25:00.168 [2024-11-26 18:26:34.534171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.168 [2024-11-26 18:26:34.546325] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:00.168 [2024-11-26 18:26:34.549230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.168 [2024-11-26 18:26:34.549470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:00.168 [2024-11-26 18:26:34.549499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.992 ms 00:25:00.168 [2024-11-26 18:26:34.549520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.168 [2024-11-26 18:26:34.549674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.168 [2024-11-26 18:26:34.549707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:00.168 [2024-11-26 18:26:34.549736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:00.168 [2024-11-26 18:26:34.549748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.168 [2024-11-26 18:26:34.549949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.168 [2024-11-26 18:26:34.549969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:00.168 [2024-11-26 18:26:34.549983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:25:00.168 [2024-11-26 18:26:34.549994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.168 [2024-11-26 18:26:34.550028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.168 [2024-11-26 18:26:34.550042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:25:00.168 [2024-11-26 18:26:34.550055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:00.168 [2024-11-26 18:26:34.550067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.168 [2024-11-26 18:26:34.550123] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:00.168 [2024-11-26 18:26:34.550148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.168 [2024-11-26 18:26:34.550160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:00.168 [2024-11-26 18:26:34.550173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:00.168 [2024-11-26 18:26:34.550184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.168 [2024-11-26 18:26:34.581624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.168 [2024-11-26 18:26:34.581668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:00.168 [2024-11-26 18:26:34.581699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.403 ms 00:25:00.168 [2024-11-26 18:26:34.581721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.168 [2024-11-26 18:26:34.581823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.168 [2024-11-26 18:26:34.581841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:00.168 [2024-11-26 18:26:34.581853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:00.168 [2024-11-26 18:26:34.581862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.168 [2024-11-26 18:26:34.583641] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 325.265 ms, result 0 00:25:01.539  [2024-11-26T18:26:36.932Z] Copying: 22/1024 [MB] (22 MBps) [... intermediate copy progress omitted ...] [2024-11-26T18:27:19.520Z] Copying: 1024/1024 [MB] (average 22 MBps)
[2024-11-26 18:27:19.262288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.059 [2024-11-26 18:27:19.262362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:45.059 [2024-11-26 18:27:19.262385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:45.059 [2024-11-26 18:27:19.262399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.059 [2024-11-26 18:27:19.262428] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:45.059 [2024-11-26 18:27:19.266452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.059 [2024-11-26 18:27:19.266488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:45.059 [2024-11-26 18:27:19.266518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.002 ms 00:25:45.059 [2024-11-26 18:27:19.266530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.059 [2024-11-26 18:27:19.268606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.059 [2024-11-26 18:27:19.268642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:45.059 [2024-11-26 18:27:19.268659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.029 ms 00:25:45.059 [2024-11-26 18:27:19.268670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.059 [2024-11-26 18:27:19.285734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.059 [2024-11-26 18:27:19.285817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:45.059 [2024-11-26 18:27:19.285852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.042 ms 00:25:45.059 [2024-11-26 18:27:19.285864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.059 [2024-11-26 18:27:19.293340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.059 [2024-11-26 18:27:19.293419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:45.059 [2024-11-26 18:27:19.293449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.423 ms 00:25:45.059 [2024-11-26 18:27:19.293459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.059 [2024-11-26 18:27:19.329087] mngt/ftl_mngt.c: 427:trace_step:
*NOTICE*: [FTL][ftl0] Action 00:25:45.059 [2024-11-26 18:27:19.329130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:45.059 [2024-11-26 18:27:19.329164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.488 ms 00:25:45.059 [2024-11-26 18:27:19.329176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.059 [2024-11-26 18:27:19.349663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.059 [2024-11-26 18:27:19.349719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:45.059 [2024-11-26 18:27:19.349738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.443 ms 00:25:45.060 [2024-11-26 18:27:19.349749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.060 [2024-11-26 18:27:19.349896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.060 [2024-11-26 18:27:19.349930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:45.060 [2024-11-26 18:27:19.349944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:25:45.060 [2024-11-26 18:27:19.349956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.060 [2024-11-26 18:27:19.384132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.060 [2024-11-26 18:27:19.384190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:45.060 [2024-11-26 18:27:19.384209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.154 ms 00:25:45.060 [2024-11-26 18:27:19.384219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.060 [2024-11-26 18:27:19.417997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.060 [2024-11-26 18:27:19.418054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:45.060 [2024-11-26 18:27:19.418087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.735 ms 00:25:45.060 [2024-11-26 18:27:19.418098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.060 [2024-11-26 18:27:19.451157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.060 [2024-11-26 18:27:19.451393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:45.060 [2024-11-26 18:27:19.451419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.016 ms 00:25:45.060 [2024-11-26 18:27:19.451447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.060 [2024-11-26 18:27:19.484146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.060 [2024-11-26 18:27:19.484187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:45.060 [2024-11-26 18:27:19.484220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.605 ms 00:25:45.060 [2024-11-26 18:27:19.484231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.060 [2024-11-26 18:27:19.484286] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:45.060 [2024-11-26 18:27:19.484308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484364] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 
18:27:19.484724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.484995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.485007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.485033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.485044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.485071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 
00:25:45.060 [2024-11-26 18:27:19.485083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.485095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.485107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.485118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.485130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.485142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.485153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.485165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.485177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.485189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.485201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.485213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.485225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.485239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.485251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.485264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.485276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.485287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.485299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:45.060 [2024-11-26 18:27:19.485311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 
wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:45.061 [2024-11-26 18:27:19.485682] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:45.061 [2024-11-26 18:27:19.485704] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
c808b541-5662-42d3-a4fb-479cccb27fb1 00:25:45.061 [2024-11-26 18:27:19.485717] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:45.061 [2024-11-26 18:27:19.485728] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:45.061 [2024-11-26 18:27:19.485739] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:45.061 [2024-11-26 18:27:19.485751] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:45.061 [2024-11-26 18:27:19.485762] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:45.061 [2024-11-26 18:27:19.485791] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:45.061 [2024-11-26 18:27:19.485802] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:45.061 [2024-11-26 18:27:19.485812] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:45.061 [2024-11-26 18:27:19.485823] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:45.061 [2024-11-26 18:27:19.485834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.061 [2024-11-26 18:27:19.485845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:45.061 [2024-11-26 18:27:19.485858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.563 ms 00:25:45.061 [2024-11-26 18:27:19.485869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.061 [2024-11-26 18:27:19.504506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.061 [2024-11-26 18:27:19.504612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:45.061 [2024-11-26 18:27:19.504630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.586 ms 00:25:45.061 [2024-11-26 18:27:19.504642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.061 [2024-11-26 18:27:19.505150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.061 [2024-11-26 18:27:19.505176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:45.061 [2024-11-26 18:27:19.505190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 00:25:45.061 [2024-11-26 18:27:19.505216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.320 [2024-11-26 18:27:19.555219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.320 [2024-11-26 18:27:19.555475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:45.320 [2024-11-26 18:27:19.555502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.320 [2024-11-26 18:27:19.555515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.320 [2024-11-26 18:27:19.555622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.320 [2024-11-26 18:27:19.555641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:45.320 [2024-11-26 18:27:19.555654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.320 [2024-11-26 18:27:19.555672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.320 [2024-11-26 18:27:19.555757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.320 [2024-11-26 18:27:19.555776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:45.320 
[2024-11-26 18:27:19.555789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.320 [2024-11-26 18:27:19.555801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.320 [2024-11-26 18:27:19.555824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.320 [2024-11-26 18:27:19.555837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:45.320 [2024-11-26 18:27:19.555850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.320 [2024-11-26 18:27:19.555860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.320 [2024-11-26 18:27:19.674270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.320 [2024-11-26 18:27:19.674358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:45.320 [2024-11-26 18:27:19.674379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.320 [2024-11-26 18:27:19.674392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.320 [2024-11-26 18:27:19.768033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.320 [2024-11-26 18:27:19.768142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:45.320 [2024-11-26 18:27:19.768190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.320 [2024-11-26 18:27:19.768232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.320 [2024-11-26 18:27:19.768359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.320 [2024-11-26 18:27:19.768383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:45.320 [2024-11-26 18:27:19.768396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.320 [2024-11-26 18:27:19.768408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.320 [2024-11-26 18:27:19.768468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.320 [2024-11-26 18:27:19.768497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:45.320 [2024-11-26 18:27:19.768510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.320 [2024-11-26 18:27:19.768521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.320 [2024-11-26 18:27:19.768752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.320 [2024-11-26 18:27:19.768774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:45.320 [2024-11-26 18:27:19.768802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.320 [2024-11-26 18:27:19.768814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.320 [2024-11-26 18:27:19.768869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.320 [2024-11-26 18:27:19.768903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:45.320 [2024-11-26 18:27:19.768916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.320 [2024-11-26 18:27:19.768926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.320 [2024-11-26 18:27:19.768981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.320 [2024-11-26 18:27:19.769011] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:45.320 [2024-11-26 18:27:19.769039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.320 [2024-11-26 18:27:19.769050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.320 [2024-11-26 18:27:19.769133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:45.320 [2024-11-26 18:27:19.769148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:45.320 [2024-11-26 18:27:19.769161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:45.320 [2024-11-26 18:27:19.769172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.320 [2024-11-26 18:27:19.769373] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 507.045 ms, result 0 00:25:46.722 00:25:46.722 00:25:46.722 18:27:20 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:25:46.722 [2024-11-26 18:27:21.032955] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:25:46.722 [2024-11-26 18:27:21.033455] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79949 ] 00:25:46.980 [2024-11-26 18:27:21.226284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.980 [2024-11-26 18:27:21.373287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.547 [2024-11-26 18:27:21.769762] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:47.547 [2024-11-26 18:27:21.770130] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:47.547 [2024-11-26 18:27:21.931672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.547 [2024-11-26 18:27:21.932058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:47.547 [2024-11-26 18:27:21.932089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:47.547 [2024-11-26 18:27:21.932103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.547 [2024-11-26 18:27:21.932182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.547 [2024-11-26 18:27:21.932204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:47.547 [2024-11-26 18:27:21.932216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:25:47.547 [2024-11-26 18:27:21.932227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.547 [2024-11-26 18:27:21.932259] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:47.547 [2024-11-26 18:27:21.933229] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:47.547 [2024-11-26 18:27:21.933439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.547 [2024-11-26 18:27:21.933457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:47.547 [2024-11-26 18:27:21.933470] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.185 ms 00:25:47.547 [2024-11-26 18:27:21.933481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.547 [2024-11-26 18:27:21.935509] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:47.547 [2024-11-26 18:27:21.950233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.547 [2024-11-26 18:27:21.950294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:47.547 [2024-11-26 18:27:21.950329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.739 ms 00:25:47.547 [2024-11-26 18:27:21.950340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.547 [2024-11-26 18:27:21.950416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.547 [2024-11-26 18:27:21.950434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:47.547 [2024-11-26 18:27:21.950446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:47.547 [2024-11-26 18:27:21.950456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.547 [2024-11-26 18:27:21.959332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.547 [2024-11-26 18:27:21.959564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:47.547 [2024-11-26 18:27:21.959591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.781 ms 00:25:47.547 [2024-11-26 18:27:21.959613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.547 [2024-11-26 18:27:21.959711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.547 [2024-11-26 18:27:21.959729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:47.548 [2024-11-26 18:27:21.959741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:47.548 [2024-11-26 18:27:21.959752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.548 [2024-11-26 18:27:21.959811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.548 [2024-11-26 18:27:21.959827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:47.548 [2024-11-26 18:27:21.959854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:47.548 [2024-11-26 18:27:21.959865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.548 [2024-11-26 18:27:21.959902] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:47.548 [2024-11-26 18:27:21.964439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.548 [2024-11-26 18:27:21.964475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:47.548 [2024-11-26 18:27:21.964510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.545 ms 00:25:47.548 [2024-11-26 18:27:21.964520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.548 [2024-11-26 18:27:21.964554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.548 [2024-11-26 18:27:21.964607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:47.548 [2024-11-26 18:27:21.964624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:47.548 [2024-11-26 18:27:21.964635] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.548 [2024-11-26 18:27:21.964704] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:47.548 [2024-11-26 18:27:21.964735] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:47.548 [2024-11-26 18:27:21.964787] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:47.548 [2024-11-26 18:27:21.964811] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:47.548 [2024-11-26 18:27:21.964911] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:47.548 [2024-11-26 18:27:21.964926] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:47.548 [2024-11-26 18:27:21.964954] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:47.548 [2024-11-26 18:27:21.964968] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:47.548 [2024-11-26 18:27:21.964981] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:47.548 [2024-11-26 18:27:21.964992] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:47.548 [2024-11-26 18:27:21.965003] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:47.548 [2024-11-26 18:27:21.965018] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:47.548 [2024-11-26 18:27:21.965030] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:47.548 [2024-11-26 18:27:21.965041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.548 [2024-11-26 18:27:21.965052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:47.548 [2024-11-26 18:27:21.965064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:25:47.548 [2024-11-26 18:27:21.965075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.548 [2024-11-26 18:27:21.965159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.548 [2024-11-26 18:27:21.965173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:47.548 [2024-11-26 18:27:21.965184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:47.548 [2024-11-26 18:27:21.965194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.548 [2024-11-26 18:27:21.965304] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:47.548 [2024-11-26 18:27:21.965323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:47.548 [2024-11-26 18:27:21.965334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:47.548 [2024-11-26 18:27:21.965346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.548 [2024-11-26 18:27:21.965357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:47.548 [2024-11-26 18:27:21.965366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:47.548 [2024-11-26 18:27:21.965376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:47.548 
[2024-11-26 18:27:21.965386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:47.548 [2024-11-26 18:27:21.965396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:47.548 [2024-11-26 18:27:21.965408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:47.548 [2024-11-26 18:27:21.965419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:47.548 [2024-11-26 18:27:21.965429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:47.548 [2024-11-26 18:27:21.965439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:47.548 [2024-11-26 18:27:21.965461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:47.548 [2024-11-26 18:27:21.965472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:47.548 [2024-11-26 18:27:21.965482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.548 [2024-11-26 18:27:21.965492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:47.548 [2024-11-26 18:27:21.965502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:47.548 [2024-11-26 18:27:21.965513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.548 [2024-11-26 18:27:21.965523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:47.548 [2024-11-26 18:27:21.965533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:47.548 [2024-11-26 18:27:21.965543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.548 [2024-11-26 18:27:21.965553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:47.548 [2024-11-26 18:27:21.965562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:47.548 [2024-11-26 18:27:21.965572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.548 [2024-11-26 18:27:21.965598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:47.548 [2024-11-26 18:27:21.965612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:47.548 [2024-11-26 18:27:21.965621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.548 [2024-11-26 18:27:21.965631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:47.548 [2024-11-26 18:27:21.965641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:47.548 [2024-11-26 18:27:21.965651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:47.548 [2024-11-26 18:27:21.965660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:47.548 [2024-11-26 18:27:21.965670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:47.548 [2024-11-26 18:27:21.965680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:47.548 [2024-11-26 18:27:21.965690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:47.548 [2024-11-26 18:27:21.965700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:47.548 [2024-11-26 18:27:21.965710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:47.548 [2024-11-26 18:27:21.965719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:47.548 [2024-11-26 18:27:21.965729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:25:47.548 [2024-11-26 18:27:21.965738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.548 [2024-11-26 18:27:21.965747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:47.549 [2024-11-26 18:27:21.965758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:47.549 [2024-11-26 18:27:21.965768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.549 [2024-11-26 18:27:21.965777] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:47.549 [2024-11-26 18:27:21.965788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:47.549 [2024-11-26 18:27:21.965799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:47.549 [2024-11-26 18:27:21.965809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:47.549 [2024-11-26 18:27:21.965820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:47.549 [2024-11-26 18:27:21.965830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:47.549 [2024-11-26 18:27:21.965839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:47.549 [2024-11-26 18:27:21.965850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:47.549 [2024-11-26 18:27:21.965859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:47.549 [2024-11-26 18:27:21.965870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:47.549 [2024-11-26 18:27:21.965881] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:47.549 [2024-11-26 18:27:21.965895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:47.549 [2024-11-26 18:27:21.965912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:47.549 [2024-11-26 18:27:21.965923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:47.549 [2024-11-26 18:27:21.965933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:47.549 [2024-11-26 18:27:21.965944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:47.549 [2024-11-26 18:27:21.965954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:47.549 [2024-11-26 18:27:21.965964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:47.549 [2024-11-26 18:27:21.965975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:47.549 [2024-11-26 18:27:21.965985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:47.549 [2024-11-26 18:27:21.965995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:47.549 [2024-11-26 18:27:21.966005] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:47.549 [2024-11-26 18:27:21.966016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:47.549 [2024-11-26 18:27:21.966026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:47.549 [2024-11-26 18:27:21.966036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:47.549 [2024-11-26 18:27:21.966046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:47.549 [2024-11-26 18:27:21.966056] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:47.549 [2024-11-26 18:27:21.966068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:47.549 [2024-11-26 18:27:21.966079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:47.549 [2024-11-26 18:27:21.966090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:47.549 [2024-11-26 18:27:21.966103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:47.549 [2024-11-26 18:27:21.966114] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:47.549 [2024-11-26 18:27:21.966125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.549 [2024-11-26 18:27:21.966137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:47.549 [2024-11-26 18:27:21.966150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.880 ms 00:25:47.549 [2024-11-26 18:27:21.966161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.549 [2024-11-26 18:27:22.001701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.549 [2024-11-26 18:27:22.002036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:47.549 [2024-11-26 18:27:22.002067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.480 ms 00:25:47.549 [2024-11-26 18:27:22.002088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.549 [2024-11-26 18:27:22.002204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.549 [2024-11-26 18:27:22.002219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:47.549 [2024-11-26 18:27:22.002232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:25:47.549 [2024-11-26 18:27:22.002243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.808 [2024-11-26 18:27:22.053175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.808 [2024-11-26 18:27:22.053276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:47.808 [2024-11-26 18:27:22.053328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.832 ms 
00:25:47.808 [2024-11-26 18:27:22.053341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.808 [2024-11-26 18:27:22.053423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.808 [2024-11-26 18:27:22.053440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:47.808 [2024-11-26 18:27:22.053458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:47.808 [2024-11-26 18:27:22.053469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.808 [2024-11-26 18:27:22.054234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.808 [2024-11-26 18:27:22.054260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:47.808 [2024-11-26 18:27:22.054274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:25:47.808 [2024-11-26 18:27:22.054286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.808 [2024-11-26 18:27:22.054466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.808 [2024-11-26 18:27:22.054491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:47.808 [2024-11-26 18:27:22.054511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:25:47.808 [2024-11-26 18:27:22.054522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.808 [2024-11-26 18:27:22.074054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.808 [2024-11-26 18:27:22.074231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:47.808 [2024-11-26 18:27:22.074376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.466 ms 00:25:47.808 [2024-11-26 18:27:22.074427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.808 [2024-11-26 18:27:22.091082] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:47.808 [2024-11-26 18:27:22.091350] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:47.808 [2024-11-26 18:27:22.091469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.808 [2024-11-26 18:27:22.091508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:47.808 [2024-11-26 18:27:22.091637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.840 ms 00:25:47.808 [2024-11-26 18:27:22.091685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.808 [2024-11-26 18:27:22.121940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.808 [2024-11-26 18:27:22.122104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:47.808 [2024-11-26 18:27:22.122217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.182 ms 00:25:47.808 [2024-11-26 18:27:22.122291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.808 [2024-11-26 18:27:22.138396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.808 [2024-11-26 18:27:22.138434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:47.808 [2024-11-26 18:27:22.138464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.871 ms 00:25:47.808 [2024-11-26 18:27:22.138474] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.808 [2024-11-26 18:27:22.153584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.808 [2024-11-26 18:27:22.153652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:47.808 [2024-11-26 18:27:22.153685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.062 ms 00:25:47.808 [2024-11-26 18:27:22.153695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.808 [2024-11-26 18:27:22.154635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.808 [2024-11-26 18:27:22.154664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:47.808 [2024-11-26 18:27:22.154685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:25:47.808 [2024-11-26 18:27:22.154698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.808 [2024-11-26 18:27:22.229182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.808 [2024-11-26 18:27:22.229267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:47.808 [2024-11-26 18:27:22.229310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.450 ms 00:25:47.808 [2024-11-26 18:27:22.229322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.808 [2024-11-26 18:27:22.239536] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:47.808 [2024-11-26 18:27:22.242270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.808 [2024-11-26 18:27:22.242319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:47.808 [2024-11-26 18:27:22.242366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.881 ms 00:25:47.808 [2024-11-26 18:27:22.242378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.808 [2024-11-26 18:27:22.242467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.808 [2024-11-26 18:27:22.242485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:47.808 [2024-11-26 18:27:22.242502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:47.808 [2024-11-26 18:27:22.242513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.808 [2024-11-26 18:27:22.242649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.808 [2024-11-26 18:27:22.242669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:47.808 [2024-11-26 18:27:22.242682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:25:47.808 [2024-11-26 18:27:22.242694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.808 [2024-11-26 18:27:22.242726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:47.808 [2024-11-26 18:27:22.242741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:47.808 [2024-11-26 18:27:22.242767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:47.808 [2024-11-26 18:27:22.242778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:47.808 [2024-11-26 18:27:22.242825] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:47.808 [2024-11-26 18:27:22.242841] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:25:47.808 [2024-11-26 18:27:22.242852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:47.808 [2024-11-26 18:27:22.242879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:47.808 [2024-11-26 18:27:22.242890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.067 [2024-11-26 18:27:22.268757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.067 [2024-11-26 18:27:22.268946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:48.067 [2024-11-26 18:27:22.269073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.839 ms 00:25:48.067 [2024-11-26 18:27:22.269120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.067 [2024-11-26 18:27:22.269254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:48.067 [2024-11-26 18:27:22.269315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:48.067 [2024-11-26 18:27:22.269421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:25:48.067 [2024-11-26 18:27:22.269467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:48.067 [2024-11-26 18:27:22.271017] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 338.755 ms, result 0 00:25:49.443  [2024-11-26T18:27:24.472Z] Copying: 23/1024 [MB] (23 MBps) ... [2024-11-26T18:28:07.950Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-26 18:28:07.826764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.489 [2024-11-26 18:28:07.826872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:33.489 [2024-11-26 18:28:07.826897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:33.489 [2024-11-26 18:28:07.826912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.489 [2024-11-26 18:28:07.826946] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:33.489 [2024-11-26 18:28:07.831302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.489 [2024-11-26 18:28:07.831344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:33.489 [2024-11-26 18:28:07.831376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.331 ms 00:26:33.489 [2024-11-26 18:28:07.831388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.489 [2024-11-26 18:28:07.831648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.489 [2024-11-26 18:28:07.831669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:33.489 [2024-11-26 18:28:07.831682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:26:33.489 [2024-11-26 18:28:07.831694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.489 [2024-11-26 18:28:07.835108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.489 [2024-11-26 18:28:07.835143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:33.489 [2024-11-26 18:28:07.835174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.385 ms 00:26:33.489 [2024-11-26 18:28:07.835198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.489 [2024-11-26 18:28:07.842404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.489 [2024-11-26 18:28:07.842444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:33.489 [2024-11-26 18:28:07.842477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.184 ms 00:26:33.489 [2024-11-26 18:28:07.842488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.489 [2024-11-26 18:28:07.872842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.489 [2024-11-26 18:28:07.872885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:33.489 [2024-11-26 18:28:07.872935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.228 ms 00:26:33.489 [2024-11-26 18:28:07.872947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.489 [2024-11-26
18:28:07.889569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.489 [2024-11-26 18:28:07.889637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:33.489 [2024-11-26 18:28:07.889689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.595 ms 00:26:33.489 [2024-11-26 18:28:07.889701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.489 [2024-11-26 18:28:07.889854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.489 [2024-11-26 18:28:07.889891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:33.489 [2024-11-26 18:28:07.889920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:26:33.489 [2024-11-26 18:28:07.889932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.489 [2024-11-26 18:28:07.920471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.489 [2024-11-26 18:28:07.920515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:33.489 [2024-11-26 18:28:07.920547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.517 ms 00:26:33.489 [2024-11-26 18:28:07.920558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.749 [2024-11-26 18:28:07.949805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.749 [2024-11-26 18:28:07.949847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:33.749 [2024-11-26 18:28:07.949879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.169 ms 00:26:33.749 [2024-11-26 18:28:07.949890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.749 [2024-11-26 18:28:07.977959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.749 [2024-11-26 18:28:07.978000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:33.749 [2024-11-26 18:28:07.978032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.043 ms 00:26:33.749 [2024-11-26 18:28:07.978043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.749 [2024-11-26 18:28:08.005918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.749 [2024-11-26 18:28:08.005977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:33.749 [2024-11-26 18:28:08.006008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.807 ms 00:26:33.749 [2024-11-26 18:28:08.006020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.749 [2024-11-26 18:28:08.006045] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:33.749 [2024-11-26 18:28:08.006087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:33.749 [2024-11-26 18:28:08.006109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:33.749 [2024-11-26 18:28:08.006120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:33.749 [2024-11-26 18:28:08.006132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:33.749 [2024-11-26 18:28:08.006144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:33.749 [2024-11-26 
18:28:08.006155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:33.749 [2024-11-26 18:28:08.006167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:33.749 [2024-11-26 18:28:08.006178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:33.749 [2024-11-26 18:28:08.006190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:33.749 [2024-11-26 18:28:08.006202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:33.749 [2024-11-26 18:28:08.006213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:33.749 [2024-11-26 18:28:08.006224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:33.749 [2024-11-26 18:28:08.006235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:33.749 [2024-11-26 18:28:08.006246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 
00:26:33.750 [2024-11-26 18:28:08.006441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 
wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.006995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.007022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.007033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.007045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.007057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.007068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.007082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.007093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.007105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.007117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.007130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.007142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.007153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.007165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.007178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.007190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:33.750 [2024-11-26 18:28:08.007201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:33.751 [2024-11-26 18:28:08.007213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:33.751 [2024-11-26 18:28:08.007226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:33.751 [2024-11-26 18:28:08.007237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:33.751 [2024-11-26 18:28:08.007248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:33.751 [2024-11-26 18:28:08.007260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:33.751 [2024-11-26 18:28:08.007271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:33.751 [2024-11-26 18:28:08.007283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:33.751 [2024-11-26 18:28:08.007295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:33.751 [2024-11-26 18:28:08.007306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:33.751 [2024-11-26 18:28:08.007317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:33.751 [2024-11-26 18:28:08.007329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:33.751 [2024-11-26 18:28:08.007340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:33.751 [2024-11-26 18:28:08.007353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:33.751 [2024-11-26 18:28:08.007366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:33.751 [2024-11-26 18:28:08.007378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:33.751 [2024-11-26 18:28:08.007399] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:33.751 [2024-11-26 18:28:08.007411] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c808b541-5662-42d3-a4fb-479cccb27fb1 00:26:33.751 [2024-11-26 18:28:08.007423] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:33.751 [2024-11-26 18:28:08.007436] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:33.751 [2024-11-26 18:28:08.007447] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:33.751 [2024-11-26 18:28:08.007459] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:33.751 [2024-11-26 18:28:08.007484] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:33.751 [2024-11-26 18:28:08.007497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:33.751 [2024-11-26 18:28:08.007508] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:33.751 [2024-11-26 18:28:08.007519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:33.751 [2024-11-26 18:28:08.007529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:33.751 [2024-11-26 18:28:08.007541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.751 [2024-11-26 18:28:08.007552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:33.751 [2024-11-26 18:28:08.007564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.497 ms 00:26:33.751 [2024-11-26 18:28:08.007600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.751 [2024-11-26 18:28:08.023866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.751 [2024-11-26 18:28:08.023906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:33.751 [2024-11-26 18:28:08.023923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.213 ms 00:26:33.751 [2024-11-26 18:28:08.023934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.751 [2024-11-26 18:28:08.024399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.751 [2024-11-26 18:28:08.024420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:33.751 [2024-11-26 18:28:08.024444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:26:33.751 [2024-11-26 18:28:08.024455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.751 [2024-11-26 18:28:08.066395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.751 [2024-11-26 18:28:08.066444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:33.751 [2024-11-26 18:28:08.066477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.751 [2024-11-26 18:28:08.066496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.751 [2024-11-26 18:28:08.066607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.751 [2024-11-26 18:28:08.066627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:33.751 [2024-11-26 18:28:08.066649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.751 [2024-11-26 18:28:08.066661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.751 [2024-11-26 18:28:08.066773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.751 [2024-11-26 18:28:08.066796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:33.751 [2024-11-26 18:28:08.066810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.751 [2024-11-26 18:28:08.066822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.751 [2024-11-26 18:28:08.066847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.751 [2024-11-26 18:28:08.066862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid 
map 00:26:33.751 [2024-11-26 18:28:08.066874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.751 [2024-11-26 18:28:08.066909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.751 [2024-11-26 18:28:08.167643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:33.751 [2024-11-26 18:28:08.167715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:33.751 [2024-11-26 18:28:08.167752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:33.751 [2024-11-26 18:28:08.167764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.010 [2024-11-26 18:28:08.247039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.010 [2024-11-26 18:28:08.247320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:34.010 [2024-11-26 18:28:08.247358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.010 [2024-11-26 18:28:08.247372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.010 [2024-11-26 18:28:08.247487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.010 [2024-11-26 18:28:08.247506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:34.010 [2024-11-26 18:28:08.247520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.010 [2024-11-26 18:28:08.247532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.010 [2024-11-26 18:28:08.247645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.010 [2024-11-26 18:28:08.247682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:34.010 [2024-11-26 18:28:08.247696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.010 [2024-11-26 18:28:08.247709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.010 [2024-11-26 18:28:08.247867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.010 [2024-11-26 18:28:08.247887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:34.010 [2024-11-26 18:28:08.247918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.010 [2024-11-26 18:28:08.247930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.010 [2024-11-26 18:28:08.247983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.010 [2024-11-26 18:28:08.248031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:34.010 [2024-11-26 18:28:08.248044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.010 [2024-11-26 18:28:08.248055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.010 [2024-11-26 18:28:08.248109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.010 [2024-11-26 18:28:08.248125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:34.010 [2024-11-26 18:28:08.248138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.010 [2024-11-26 18:28:08.248149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.010 [2024-11-26 18:28:08.248201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:34.010 [2024-11-26 18:28:08.248218] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:34.010 [2024-11-26 18:28:08.248230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:34.010 [2024-11-26 18:28:08.248242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:34.010 [2024-11-26 18:28:08.248405] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 421.613 ms, result 0 00:26:34.947 00:26:34.947 00:26:34.947 18:28:09 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:36.848 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:36.848 18:28:11 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:26:36.848 [2024-11-26 18:28:11.280603] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:26:36.848 [2024-11-26 18:28:11.280788] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80445 ] 00:26:37.107 [2024-11-26 18:28:11.471107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.365 [2024-11-26 18:28:11.610273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.625 [2024-11-26 18:28:11.965158] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:37.625 [2024-11-26 18:28:11.965475] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:37.886 [2024-11-26 18:28:12.127866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.886 [2024-11-26 18:28:12.127920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:37.886 [2024-11-26 18:28:12.127958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:37.886 [2024-11-26 18:28:12.127969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.886 [2024-11-26 18:28:12.128030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.886 [2024-11-26 18:28:12.128051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:37.886 [2024-11-26 18:28:12.128064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:37.886 [2024-11-26 18:28:12.128074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.886 [2024-11-26 18:28:12.128104] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:37.886 [2024-11-26 18:28:12.129126] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:37.886 [2024-11-26 18:28:12.129165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.886 [2024-11-26 18:28:12.129180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:37.886 [2024-11-26 18:28:12.129193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.068 ms 00:26:37.886 [2024-11-26 18:28:12.129204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.886 [2024-11-26 18:28:12.131465] mngt/ftl_mngt_md.c: 
455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:37.886 [2024-11-26 18:28:12.146798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.886 [2024-11-26 18:28:12.147056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:37.886 [2024-11-26 18:28:12.147085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.334 ms 00:26:37.886 [2024-11-26 18:28:12.147098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.886 [2024-11-26 18:28:12.147175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.886 [2024-11-26 18:28:12.147194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:37.886 [2024-11-26 18:28:12.147206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:26:37.886 [2024-11-26 18:28:12.147217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.886 [2024-11-26 18:28:12.156225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.886 [2024-11-26 18:28:12.156266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:37.886 [2024-11-26 18:28:12.156297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.918 ms 00:26:37.886 [2024-11-26 18:28:12.156314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.886 [2024-11-26 18:28:12.156404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.886 [2024-11-26 18:28:12.156423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:37.886 [2024-11-26 18:28:12.156435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:26:37.886 [2024-11-26 18:28:12.156445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.886 [2024-11-26 18:28:12.156503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.886 [2024-11-26 18:28:12.156520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:37.886 [2024-11-26 18:28:12.156532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:37.886 [2024-11-26 18:28:12.156543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.886 [2024-11-26 18:28:12.156641] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:37.886 [2024-11-26 18:28:12.161289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.886 [2024-11-26 18:28:12.161326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:37.886 [2024-11-26 18:28:12.161362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.673 ms 00:26:37.886 [2024-11-26 18:28:12.161373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.886 [2024-11-26 18:28:12.161416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.886 [2024-11-26 18:28:12.161432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:37.886 [2024-11-26 18:28:12.161443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:37.886 [2024-11-26 18:28:12.161453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.886 [2024-11-26 18:28:12.161518] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:37.886 [2024-11-26 18:28:12.161549] 
upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:37.886 [2024-11-26 18:28:12.161625] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:37.886 [2024-11-26 18:28:12.161654] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:37.886 [2024-11-26 18:28:12.161769] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:37.886 [2024-11-26 18:28:12.161785] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:37.886 [2024-11-26 18:28:12.161800] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:37.886 [2024-11-26 18:28:12.161815] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:37.886 [2024-11-26 18:28:12.161829] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:37.886 [2024-11-26 18:28:12.161840] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:37.887 [2024-11-26 18:28:12.161851] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:37.887 [2024-11-26 18:28:12.161866] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:37.887 [2024-11-26 18:28:12.161877] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:37.887 [2024-11-26 18:28:12.161889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.887 [2024-11-26 18:28:12.161899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:37.887 [2024-11-26 18:28:12.161911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:26:37.887 [2024-11-26 18:28:12.161921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.887 [2024-11-26 18:28:12.162044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.887 [2024-11-26 18:28:12.162060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:37.887 [2024-11-26 18:28:12.162071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:26:37.887 [2024-11-26 18:28:12.162081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.887 [2024-11-26 18:28:12.162194] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:37.887 [2024-11-26 18:28:12.162214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:37.887 [2024-11-26 18:28:12.162226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:37.887 [2024-11-26 18:28:12.162237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.887 [2024-11-26 18:28:12.162247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:37.887 [2024-11-26 18:28:12.162257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:37.887 [2024-11-26 18:28:12.162267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:37.887 [2024-11-26 18:28:12.162277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:37.887 [2024-11-26 18:28:12.162286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 
00:26:37.887 [2024-11-26 18:28:12.162296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:37.887 [2024-11-26 18:28:12.162307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:37.887 [2024-11-26 18:28:12.162318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:37.887 [2024-11-26 18:28:12.162328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:37.887 [2024-11-26 18:28:12.162351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:37.887 [2024-11-26 18:28:12.162361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:37.887 [2024-11-26 18:28:12.162371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.887 [2024-11-26 18:28:12.162381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:37.887 [2024-11-26 18:28:12.162391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:37.887 [2024-11-26 18:28:12.162401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.887 [2024-11-26 18:28:12.162411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:37.887 [2024-11-26 18:28:12.162421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:37.887 [2024-11-26 18:28:12.162430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:37.887 [2024-11-26 18:28:12.162440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:37.887 [2024-11-26 18:28:12.162450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:37.887 [2024-11-26 18:28:12.162459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:37.887 [2024-11-26 18:28:12.162469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:37.887 [2024-11-26 18:28:12.162478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:37.887 [2024-11-26 18:28:12.162487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:37.887 [2024-11-26 18:28:12.162496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:37.887 [2024-11-26 18:28:12.162506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:37.887 [2024-11-26 18:28:12.162516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:37.887 [2024-11-26 18:28:12.162526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:37.887 [2024-11-26 18:28:12.162535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:37.887 [2024-11-26 18:28:12.162544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:37.887 [2024-11-26 18:28:12.162553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:37.887 [2024-11-26 18:28:12.162608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:37.887 [2024-11-26 18:28:12.162620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:37.887 [2024-11-26 18:28:12.162631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:37.887 [2024-11-26 18:28:12.162642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:37.887 [2024-11-26 18:28:12.162652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.887 [2024-11-26 18:28:12.162662] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:37.887 [2024-11-26 18:28:12.162678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:37.887 [2024-11-26 18:28:12.162690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.887 [2024-11-26 18:28:12.162701] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:37.887 [2024-11-26 18:28:12.162722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:37.887 [2024-11-26 18:28:12.162742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:37.887 [2024-11-26 18:28:12.162754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:37.887 [2024-11-26 18:28:12.162765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:37.887 [2024-11-26 18:28:12.162776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:37.887 [2024-11-26 18:28:12.162787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:37.887 [2024-11-26 18:28:12.162797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:37.887 [2024-11-26 18:28:12.162807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:37.887 [2024-11-26 18:28:12.162817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:37.887 [2024-11-26 18:28:12.162830] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:37.887 [2024-11-26 18:28:12.162843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:37.887 [2024-11-26 18:28:12.162877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:37.887 [2024-11-26 18:28:12.162903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:37.887 [2024-11-26 18:28:12.162913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:37.887 [2024-11-26 18:28:12.162928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:37.887 [2024-11-26 18:28:12.162939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:37.887 [2024-11-26 18:28:12.162949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:37.887 [2024-11-26 18:28:12.162959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:37.887 [2024-11-26 18:28:12.162979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:37.887 [2024-11-26 18:28:12.162989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:37.887 [2024-11-26 18:28:12.162999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:37.887 [2024-11-26 18:28:12.163010] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:37.888 [2024-11-26 18:28:12.163019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:37.888 [2024-11-26 18:28:12.163030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:37.888 [2024-11-26 18:28:12.163040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:37.888 [2024-11-26 18:28:12.163051] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:37.888 [2024-11-26 18:28:12.163062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:37.888 [2024-11-26 18:28:12.163074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:37.888 [2024-11-26 18:28:12.163084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:37.888 [2024-11-26 18:28:12.163095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:37.888 [2024-11-26 18:28:12.163107] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:37.888 [2024-11-26 18:28:12.163119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.888 [2024-11-26 18:28:12.163131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:37.888 [2024-11-26 18:28:12.163143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 00:26:37.888 [2024-11-26 18:28:12.163154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.888 [2024-11-26 18:28:12.200461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.888 [2024-11-26 18:28:12.200526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:37.888 [2024-11-26 18:28:12.200579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.234 ms 00:26:37.888 [2024-11-26 18:28:12.200633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.888 [2024-11-26 18:28:12.200784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.888 [2024-11-26 18:28:12.200810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:37.888 [2024-11-26 18:28:12.200824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:26:37.888 [2024-11-26 18:28:12.200835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.888 [2024-11-26 18:28:12.251385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.888 [2024-11-26 18:28:12.251438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:37.888 [2024-11-26 18:28:12.251476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.431 ms 00:26:37.888 [2024-11-26 18:28:12.251487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.888 [2024-11-26 18:28:12.251544] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.888 [2024-11-26 18:28:12.251562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:37.888 [2024-11-26 18:28:12.251619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:37.888 [2024-11-26 18:28:12.251631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.888 [2024-11-26 18:28:12.252335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.888 [2024-11-26 18:28:12.252361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:37.888 [2024-11-26 18:28:12.252375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:26:37.888 [2024-11-26 18:28:12.252386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.888 [2024-11-26 18:28:12.252562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.888 [2024-11-26 18:28:12.252599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:37.888 [2024-11-26 18:28:12.252618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:26:37.888 [2024-11-26 18:28:12.252630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.888 [2024-11-26 18:28:12.270748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.888 [2024-11-26 18:28:12.270793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:37.888 [2024-11-26 18:28:12.270827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.058 ms 00:26:37.888 [2024-11-26 18:28:12.270839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.888 [2024-11-26 18:28:12.285976] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:37.888 [2024-11-26 18:28:12.286018] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:37.888 [2024-11-26 18:28:12.286052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.888 [2024-11-26 18:28:12.286063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:37.888 [2024-11-26 18:28:12.286075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.059 ms 00:26:37.888 [2024-11-26 18:28:12.286085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.888 [2024-11-26 18:28:12.312007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.888 [2024-11-26 18:28:12.312050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:37.888 [2024-11-26 18:28:12.312083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.877 ms 00:26:37.888 [2024-11-26 18:28:12.312093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.888 [2024-11-26 18:28:12.325983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.888 [2024-11-26 18:28:12.326025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:37.888 [2024-11-26 18:28:12.326056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.829 ms 00:26:37.888 [2024-11-26 18:28:12.326067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.888 [2024-11-26 18:28:12.339537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.888 [2024-11-26 
18:28:12.339610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:37.888 [2024-11-26 18:28:12.339644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.428 ms 00:26:37.888 [2024-11-26 18:28:12.339654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:37.888 [2024-11-26 18:28:12.340437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:37.888 [2024-11-26 18:28:12.340464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:37.888 [2024-11-26 18:28:12.340483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:26:37.888 [2024-11-26 18:28:12.340494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.148 [2024-11-26 18:28:12.411172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.148 [2024-11-26 18:28:12.411570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:38.148 [2024-11-26 18:28:12.411611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.652 ms 00:26:38.148 [2024-11-26 18:28:12.411626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.148 [2024-11-26 18:28:12.422551] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:38.148 [2024-11-26 18:28:12.424979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.148 [2024-11-26 18:28:12.425167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:38.148 [2024-11-26 18:28:12.425195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.284 ms 00:26:38.148 [2024-11-26 18:28:12.425208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.148 [2024-11-26 18:28:12.425327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.148 [2024-11-26 18:28:12.425347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:38.148 [2024-11-26 18:28:12.425365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:38.148 [2024-11-26 18:28:12.425376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.148 [2024-11-26 18:28:12.425476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.148 [2024-11-26 18:28:12.425510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:38.148 [2024-11-26 18:28:12.425522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:26:38.148 [2024-11-26 18:28:12.425534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.148 [2024-11-26 18:28:12.425567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.148 [2024-11-26 18:28:12.425602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:38.148 [2024-11-26 18:28:12.425636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:38.148 [2024-11-26 18:28:12.425648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.148 [2024-11-26 18:28:12.425721] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:38.148 [2024-11-26 18:28:12.425755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.148 [2024-11-26 18:28:12.425767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:38.148 
[2024-11-26 18:28:12.425779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:38.148 [2024-11-26 18:28:12.425789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.148 [2024-11-26 18:28:12.453509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.148 [2024-11-26 18:28:12.453583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:38.148 [2024-11-26 18:28:12.453624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.690 ms 00:26:38.148 [2024-11-26 18:28:12.453636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.148 [2024-11-26 18:28:12.453750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.148 [2024-11-26 18:28:12.453769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:38.148 [2024-11-26 18:28:12.453782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:26:38.148 [2024-11-26 18:28:12.453793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.148 [2024-11-26 18:28:12.455325] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 326.894 ms, result 0 00:26:39.084  [2024-11-26T18:28:14.482Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-26T18:28:15.912Z] Copying: 45/1024 [MB] (22 MBps) [2024-11-26T18:28:16.479Z] Copying: 69/1024 [MB] (23 MBps) [2024-11-26T18:28:17.857Z] Copying: 92/1024 [MB] (23 MBps) [2024-11-26T18:28:18.794Z] Copying: 116/1024 [MB] (24 MBps) [2024-11-26T18:28:19.729Z] Copying: 139/1024 [MB] (22 MBps) [2024-11-26T18:28:20.664Z] Copying: 162/1024 [MB] (22 MBps) [2024-11-26T18:28:21.601Z] Copying: 184/1024 [MB] (22 MBps) [2024-11-26T18:28:22.536Z] Copying: 207/1024 [MB] (22 MBps) [2024-11-26T18:28:23.473Z] Copying: 231/1024 [MB] (23 MBps) [2024-11-26T18:28:24.849Z] Copying: 254/1024 [MB] (22 MBps) [2024-11-26T18:28:25.785Z] Copying: 276/1024 [MB] (22 MBps) [2024-11-26T18:28:26.723Z] Copying: 299/1024 [MB] (22 MBps) [2024-11-26T18:28:27.764Z] Copying: 322/1024 [MB] (22 MBps) [2024-11-26T18:28:28.699Z] Copying: 345/1024 [MB] (23 MBps) [2024-11-26T18:28:29.636Z] Copying: 367/1024 [MB] (22 MBps) [2024-11-26T18:28:30.572Z] Copying: 389/1024 [MB] (22 MBps) [2024-11-26T18:28:31.507Z] Copying: 412/1024 [MB] (23 MBps) [2024-11-26T18:28:32.884Z] Copying: 435/1024 [MB] (22 MBps) [2024-11-26T18:28:33.820Z] Copying: 458/1024 [MB] (22 MBps) [2024-11-26T18:28:34.756Z] Copying: 481/1024 [MB] (23 MBps) [2024-11-26T18:28:35.692Z] Copying: 505/1024 [MB] (23 MBps) [2024-11-26T18:28:36.627Z] Copying: 528/1024 [MB] (23 MBps) [2024-11-26T18:28:37.563Z] Copying: 551/1024 [MB] (22 MBps) [2024-11-26T18:28:38.523Z] Copying: 573/1024 [MB] (22 MBps) [2024-11-26T18:28:39.913Z] Copying: 596/1024 [MB] (23 MBps) [2024-11-26T18:28:40.480Z] Copying: 620/1024 [MB] (23 MBps) [2024-11-26T18:28:41.858Z] Copying: 643/1024 [MB] (22 MBps) [2024-11-26T18:28:42.795Z] Copying: 666/1024 [MB] (22 MBps) [2024-11-26T18:28:43.732Z] Copying: 688/1024 [MB] (22 MBps) [2024-11-26T18:28:44.669Z] Copying: 711/1024 [MB] (22 MBps) [2024-11-26T18:28:45.607Z] Copying: 734/1024 [MB] (22 MBps) [2024-11-26T18:28:46.546Z] Copying: 757/1024 [MB] (23 MBps) [2024-11-26T18:28:47.483Z] Copying: 780/1024 [MB] (22 MBps) [2024-11-26T18:28:48.861Z] Copying: 802/1024 [MB] (22 MBps) [2024-11-26T18:28:49.797Z] Copying: 825/1024 [MB] (22 MBps) [2024-11-26T18:28:50.755Z] Copying: 848/1024 [MB] (22 MBps) [2024-11-26T18:28:51.689Z] 
Copying: 870/1024 [MB] (22 MBps) [2024-11-26T18:28:52.622Z] Copying: 893/1024 [MB] (22 MBps) [2024-11-26T18:28:53.555Z] Copying: 916/1024 [MB] (22 MBps) [2024-11-26T18:28:54.488Z] Copying: 939/1024 [MB] (23 MBps) [2024-11-26T18:28:55.864Z] Copying: 961/1024 [MB] (22 MBps) [2024-11-26T18:28:56.799Z] Copying: 984/1024 [MB] (22 MBps) [2024-11-26T18:28:57.736Z] Copying: 1007/1024 [MB] (22 MBps) [2024-11-26T18:28:58.303Z] Copying: 1023/1024 [MB] (15 MBps) [2024-11-26T18:28:58.303Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-26 18:28:58.228672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.842 [2024-11-26 18:28:58.228790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:23.842 [2024-11-26 18:28:58.228843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:23.842 [2024-11-26 18:28:58.228857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.842 [2024-11-26 18:28:58.230991] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:23.842 [2024-11-26 18:28:58.236710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.842 [2024-11-26 18:28:58.236915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:23.842 [2024-11-26 18:28:58.236942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.635 ms 00:27:23.842 [2024-11-26 18:28:58.236956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.842 [2024-11-26 18:28:58.248803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.842 [2024-11-26 18:28:58.248846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:23.842 [2024-11-26 18:28:58.248879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.841 ms 00:27:23.842 [2024-11-26 18:28:58.248899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.842 [2024-11-26 18:28:58.270841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.842 [2024-11-26 18:28:58.270919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:23.842 [2024-11-26 18:28:58.270952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.920 ms 00:27:23.842 [2024-11-26 18:28:58.270963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:23.842 [2024-11-26 18:28:58.276238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:23.842 [2024-11-26 18:28:58.276273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:23.842 [2024-11-26 18:28:58.276303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.238 ms 00:27:23.842 [2024-11-26 18:28:58.276322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.102 [2024-11-26 18:28:58.302933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.102 [2024-11-26 18:28:58.303133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:24.102 [2024-11-26 18:28:58.303256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.549 ms 00:27:24.102 [2024-11-26 18:28:58.303279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.102 [2024-11-26 18:28:58.318648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.102 [2024-11-26 18:28:58.318723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist valid map metadata 00:27:24.102 [2024-11-26 18:28:58.318757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.309 ms 00:27:24.103 [2024-11-26 18:28:58.318785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.103 [2024-11-26 18:28:58.429518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.103 [2024-11-26 18:28:58.429628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:24.103 [2024-11-26 18:28:58.429664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 110.685 ms 00:27:24.103 [2024-11-26 18:28:58.429706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.103 [2024-11-26 18:28:58.456608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.103 [2024-11-26 18:28:58.456649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:24.103 [2024-11-26 18:28:58.456681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.863 ms 00:27:24.103 [2024-11-26 18:28:58.456691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.103 [2024-11-26 18:28:58.482340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.103 [2024-11-26 18:28:58.482551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:24.103 [2024-11-26 18:28:58.482622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.608 ms 00:27:24.103 [2024-11-26 18:28:58.482635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.103 [2024-11-26 18:28:58.508959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.103 [2024-11-26 18:28:58.509001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:24.103 [2024-11-26 18:28:58.509033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.281 ms 00:27:24.103 [2024-11-26 18:28:58.509044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.103 [2024-11-26 18:28:58.536243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.103 [2024-11-26 18:28:58.536301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:24.103 [2024-11-26 18:28:58.536333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.116 ms 00:27:24.103 [2024-11-26 18:28:58.536343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.103 [2024-11-26 18:28:58.536384] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:24.103 [2024-11-26 18:28:58.536407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 116736 / 261120 wr_cnt: 1 state: open 00:27:24.103 [2024-11-26 18:28:58.536421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536479] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 
18:28:58.536824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.536992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.537005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.537017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.537029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.537041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.537053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.537065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.537077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.537090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.537102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.537115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.537127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.537155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.537167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 
00:27:24.103 [2024-11-26 18:28:58.537180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.537192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.537204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.537216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.537228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.537240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.537266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.537277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.537290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.537302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.537314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:24.103 [2024-11-26 18:28:58.537326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 
wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:24.104 [2024-11-26 18:28:58.537764] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:24.104 [2024-11-26 18:28:58.537776] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c808b541-5662-42d3-a4fb-479cccb27fb1 00:27:24.104 [2024-11-26 18:28:58.537788] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 116736 00:27:24.104 [2024-11-26 18:28:58.537799] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 117696 00:27:24.104 [2024-11-26 18:28:58.537810] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 116736 00:27:24.104 [2024-11-26 18:28:58.537822] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0082 00:27:24.104 [2024-11-26 18:28:58.537850] ftl_debug.c: 218:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] limits: 00:27:24.104 [2024-11-26 18:28:58.537862] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:24.104 [2024-11-26 18:28:58.537873] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:24.104 [2024-11-26 18:28:58.537900] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:24.104 [2024-11-26 18:28:58.537910] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:24.104 [2024-11-26 18:28:58.537922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.104 [2024-11-26 18:28:58.537933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:24.104 [2024-11-26 18:28:58.537946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.539 ms 00:27:24.104 [2024-11-26 18:28:58.537957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.104 [2024-11-26 18:28:58.553625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.104 [2024-11-26 18:28:58.553665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:24.104 [2024-11-26 18:28:58.553689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.624 ms 00:27:24.104 [2024-11-26 18:28:58.553701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.104 [2024-11-26 18:28:58.554159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:24.104 [2024-11-26 18:28:58.554180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:24.104 [2024-11-26 18:28:58.554194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:27:24.104 [2024-11-26 18:28:58.554205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.362 [2024-11-26 18:28:58.592457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:24.363 [2024-11-26 18:28:58.592505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:24.363 [2024-11-26 18:28:58.592537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:24.363 [2024-11-26 18:28:58.592548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.363 [2024-11-26 18:28:58.592645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:24.363 [2024-11-26 18:28:58.592663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:24.363 [2024-11-26 18:28:58.592675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:24.363 [2024-11-26 18:28:58.592684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.363 [2024-11-26 18:28:58.592806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:24.363 [2024-11-26 18:28:58.592831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:24.363 [2024-11-26 18:28:58.592843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:24.363 [2024-11-26 18:28:58.592853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.363 [2024-11-26 18:28:58.592875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:24.363 [2024-11-26 18:28:58.592889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:24.363 [2024-11-26 18:28:58.592900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:24.363 [2024-11-26 
18:28:58.592911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.363 [2024-11-26 18:28:58.681089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:24.363 [2024-11-26 18:28:58.681176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:24.363 [2024-11-26 18:28:58.681213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:24.363 [2024-11-26 18:28:58.681223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.363 [2024-11-26 18:28:58.759114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:24.363 [2024-11-26 18:28:58.759333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:24.363 [2024-11-26 18:28:58.759457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:24.363 [2024-11-26 18:28:58.759481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.363 [2024-11-26 18:28:58.759615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:24.363 [2024-11-26 18:28:58.759637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:24.363 [2024-11-26 18:28:58.759651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:24.363 [2024-11-26 18:28:58.759672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.363 [2024-11-26 18:28:58.759759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:24.363 [2024-11-26 18:28:58.759777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:24.363 [2024-11-26 18:28:58.759789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:24.363 [2024-11-26 18:28:58.759801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.363 [2024-11-26 18:28:58.759947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:24.363 [2024-11-26 18:28:58.759997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:24.363 [2024-11-26 18:28:58.760023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:24.363 [2024-11-26 18:28:58.760041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.363 [2024-11-26 18:28:58.760104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:24.363 [2024-11-26 18:28:58.760121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:24.363 [2024-11-26 18:28:58.760132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:24.363 [2024-11-26 18:28:58.760143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.363 [2024-11-26 18:28:58.760186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:24.363 [2024-11-26 18:28:58.760200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:24.363 [2024-11-26 18:28:58.760210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:24.363 [2024-11-26 18:28:58.760221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.363 [2024-11-26 18:28:58.760277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:24.363 [2024-11-26 18:28:58.760294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:24.363 [2024-11-26 18:28:58.760305] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:24.363 [2024-11-26 18:28:58.760315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:24.363 [2024-11-26 18:28:58.760492] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 533.077 ms, result 0 00:27:25.737 00:27:25.737 00:27:25.737 18:29:00 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:27:25.996 [2024-11-26 18:29:00.240437] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:27:25.996 [2024-11-26 18:29:00.240699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80922 ] 00:27:25.996 [2024-11-26 18:29:00.423953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.255 [2024-11-26 18:29:00.534864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.514 [2024-11-26 18:29:00.883172] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:26.514 [2024-11-26 18:29:00.883248] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:26.774 [2024-11-26 18:29:01.043867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.774 [2024-11-26 18:29:01.043943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:26.774 [2024-11-26 18:29:01.043993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:26.774 [2024-11-26 18:29:01.044004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.774 [2024-11-26 18:29:01.044064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.774 [2024-11-26 18:29:01.044085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:26.774 [2024-11-26 18:29:01.044097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:26.774 [2024-11-26 18:29:01.044106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.774 [2024-11-26 18:29:01.044134] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:26.774 [2024-11-26 18:29:01.045015] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:26.774 [2024-11-26 18:29:01.045071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.774 [2024-11-26 18:29:01.045084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:26.774 [2024-11-26 18:29:01.045096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.930 ms 00:27:26.774 [2024-11-26 18:29:01.045107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.774 [2024-11-26 18:29:01.047332] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:26.774 [2024-11-26 18:29:01.061692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.774 [2024-11-26 18:29:01.061735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:26.774 [2024-11-26 18:29:01.061768] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.362 ms 00:27:26.774 [2024-11-26 18:29:01.061778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.774 [2024-11-26 18:29:01.061849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.774 [2024-11-26 18:29:01.061867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:26.774 [2024-11-26 18:29:01.061878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:27:26.774 [2024-11-26 18:29:01.061888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.774 [2024-11-26 18:29:01.071024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.774 [2024-11-26 18:29:01.071260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:26.774 [2024-11-26 18:29:01.071288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.052 ms 00:27:26.774 [2024-11-26 18:29:01.071309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.774 [2024-11-26 18:29:01.071403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.774 [2024-11-26 18:29:01.071421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:26.774 [2024-11-26 18:29:01.071433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:27:26.774 [2024-11-26 18:29:01.071443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.774 [2024-11-26 18:29:01.071500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.774 [2024-11-26 18:29:01.071522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:26.774 [2024-11-26 18:29:01.071533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:26.774 [2024-11-26 18:29:01.071544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.774 [2024-11-26 18:29:01.071665] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:26.774 [2024-11-26 18:29:01.076229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.774 [2024-11-26 18:29:01.076266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:26.774 [2024-11-26 18:29:01.076301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.573 ms 00:27:26.774 [2024-11-26 18:29:01.076312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.774 [2024-11-26 18:29:01.076369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.774 [2024-11-26 18:29:01.076384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:26.774 [2024-11-26 18:29:01.076395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:26.774 [2024-11-26 18:29:01.076405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.774 [2024-11-26 18:29:01.076475] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:26.774 [2024-11-26 18:29:01.076505] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:26.774 [2024-11-26 18:29:01.076542] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:26.774 [2024-11-26 18:29:01.076564] upgrade/ftl_sb_v5.c: 
294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:26.774 [2024-11-26 18:29:01.076696] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:26.774 [2024-11-26 18:29:01.076731] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:26.774 [2024-11-26 18:29:01.076745] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:26.774 [2024-11-26 18:29:01.076759] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:26.774 [2024-11-26 18:29:01.076772] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:26.774 [2024-11-26 18:29:01.076783] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:26.774 [2024-11-26 18:29:01.076796] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:26.774 [2024-11-26 18:29:01.076812] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:26.774 [2024-11-26 18:29:01.076823] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:26.774 [2024-11-26 18:29:01.076834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.774 [2024-11-26 18:29:01.076844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:26.774 [2024-11-26 18:29:01.076854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:27:26.774 [2024-11-26 18:29:01.076871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.774 [2024-11-26 18:29:01.076974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.774 [2024-11-26 18:29:01.077005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:26.774 [2024-11-26 18:29:01.077016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:27:26.775 [2024-11-26 18:29:01.077041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.775 [2024-11-26 18:29:01.077155] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:26.775 [2024-11-26 18:29:01.077180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:26.775 [2024-11-26 18:29:01.077192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:26.775 [2024-11-26 18:29:01.077203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:26.775 [2024-11-26 18:29:01.077214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:26.775 [2024-11-26 18:29:01.077223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:26.775 [2024-11-26 18:29:01.077233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:26.775 [2024-11-26 18:29:01.077243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:26.775 [2024-11-26 18:29:01.077253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:26.775 [2024-11-26 18:29:01.077262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:26.775 [2024-11-26 18:29:01.077273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:26.775 [2024-11-26 18:29:01.077283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 
MiB 00:27:26.775 [2024-11-26 18:29:01.077293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:26.775 [2024-11-26 18:29:01.077317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:26.775 [2024-11-26 18:29:01.077328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:26.775 [2024-11-26 18:29:01.077337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:26.775 [2024-11-26 18:29:01.077347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:26.775 [2024-11-26 18:29:01.077357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:26.775 [2024-11-26 18:29:01.077367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:26.775 [2024-11-26 18:29:01.077376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:26.775 [2024-11-26 18:29:01.077386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:26.775 [2024-11-26 18:29:01.077395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:26.775 [2024-11-26 18:29:01.077405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:26.775 [2024-11-26 18:29:01.077414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:26.775 [2024-11-26 18:29:01.077424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:26.775 [2024-11-26 18:29:01.077433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:26.775 [2024-11-26 18:29:01.077457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:26.775 [2024-11-26 18:29:01.077466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:26.775 [2024-11-26 18:29:01.077486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:26.775 [2024-11-26 18:29:01.077495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:26.775 [2024-11-26 18:29:01.077504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:26.775 [2024-11-26 18:29:01.077513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:26.775 [2024-11-26 18:29:01.077522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:26.775 [2024-11-26 18:29:01.077532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:26.775 [2024-11-26 18:29:01.077541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:26.775 [2024-11-26 18:29:01.077551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:26.775 [2024-11-26 18:29:01.077560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:26.775 [2024-11-26 18:29:01.077571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:26.775 [2024-11-26 18:29:01.077581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:26.775 [2024-11-26 18:29:01.077590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:26.775 [2024-11-26 18:29:01.077599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:26.775 [2024-11-26 18:29:01.077608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:26.775 [2024-11-26 18:29:01.077619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:26.775 [2024-11-26 18:29:01.077629] ftl_layout.c: 
775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:26.775 [2024-11-26 18:29:01.077640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:26.775 [2024-11-26 18:29:01.077896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:26.775 [2024-11-26 18:29:01.077936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:26.775 [2024-11-26 18:29:01.077971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:26.775 [2024-11-26 18:29:01.078083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:26.775 [2024-11-26 18:29:01.078130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:26.775 [2024-11-26 18:29:01.078167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:26.775 [2024-11-26 18:29:01.078200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:26.775 [2024-11-26 18:29:01.078338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:26.775 [2024-11-26 18:29:01.078363] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:26.775 [2024-11-26 18:29:01.078379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:26.775 [2024-11-26 18:29:01.078400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:26.775 [2024-11-26 18:29:01.078411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:26.775 [2024-11-26 18:29:01.078421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:26.775 [2024-11-26 18:29:01.078431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:26.775 [2024-11-26 18:29:01.078442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:26.775 [2024-11-26 18:29:01.078451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:26.775 [2024-11-26 18:29:01.078465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:26.775 [2024-11-26 18:29:01.078476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:26.775 [2024-11-26 18:29:01.078486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:26.775 [2024-11-26 18:29:01.078496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:26.775 [2024-11-26 18:29:01.078506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:26.775 [2024-11-26 18:29:01.078516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:26.775 [2024-11-26 18:29:01.078526] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:26.775 [2024-11-26 18:29:01.078537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:26.775 [2024-11-26 18:29:01.078547] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:26.775 [2024-11-26 18:29:01.078850] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:26.775 [2024-11-26 18:29:01.078912] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:26.775 [2024-11-26 18:29:01.078929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:26.775 [2024-11-26 18:29:01.078955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:26.775 [2024-11-26 18:29:01.078968] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:26.775 [2024-11-26 18:29:01.078996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.775 [2024-11-26 18:29:01.079007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:26.775 [2024-11-26 18:29:01.079019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.903 ms 00:27:26.776 [2024-11-26 18:29:01.079030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.776 [2024-11-26 18:29:01.116359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.776 [2024-11-26 18:29:01.116418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:26.776 [2024-11-26 18:29:01.116457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.251 ms 00:27:26.776 [2024-11-26 18:29:01.116478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.776 [2024-11-26 18:29:01.116628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.776 [2024-11-26 18:29:01.116646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:26.776 [2024-11-26 18:29:01.116659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:27:26.776 [2024-11-26 18:29:01.116669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.776 [2024-11-26 18:29:01.164774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.776 [2024-11-26 18:29:01.164831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:26.776 [2024-11-26 18:29:01.164848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.994 ms 00:27:26.776 [2024-11-26 18:29:01.164859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.776 [2024-11-26 18:29:01.164917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.776 [2024-11-26 18:29:01.164933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:26.776 [2024-11-26 18:29:01.164950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:26.776 [2024-11-26 18:29:01.164959] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.776 [2024-11-26 18:29:01.165646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.776 [2024-11-26 18:29:01.165667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:26.776 [2024-11-26 18:29:01.165695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:27:26.776 [2024-11-26 18:29:01.165706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.776 [2024-11-26 18:29:01.165875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.776 [2024-11-26 18:29:01.165909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:26.776 [2024-11-26 18:29:01.165928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:27:26.776 [2024-11-26 18:29:01.165939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.776 [2024-11-26 18:29:01.183363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.776 [2024-11-26 18:29:01.183615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:26.776 [2024-11-26 18:29:01.183644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.398 ms 00:27:26.776 [2024-11-26 18:29:01.183657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.776 [2024-11-26 18:29:01.197902] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:26.776 [2024-11-26 18:29:01.198099] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:26.776 [2024-11-26 18:29:01.198123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.776 [2024-11-26 18:29:01.198135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:26.776 [2024-11-26 18:29:01.198148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.322 ms 00:27:26.776 [2024-11-26 18:29:01.198159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:26.776 [2024-11-26 18:29:01.222343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:26.776 [2024-11-26 18:29:01.222386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:26.776 [2024-11-26 18:29:01.222418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.138 ms 00:27:26.776 [2024-11-26 18:29:01.222429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.035 [2024-11-26 18:29:01.235250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.035 [2024-11-26 18:29:01.235295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:27.035 [2024-11-26 18:29:01.235327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.776 ms 00:27:27.035 [2024-11-26 18:29:01.235337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.035 [2024-11-26 18:29:01.247992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.035 [2024-11-26 18:29:01.248048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:27.035 [2024-11-26 18:29:01.248080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.614 ms 00:27:27.035 [2024-11-26 18:29:01.248090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.035 
[2024-11-26 18:29:01.248819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.035 [2024-11-26 18:29:01.248841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:27.035 [2024-11-26 18:29:01.248858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.618 ms 00:27:27.035 [2024-11-26 18:29:01.248869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.035 [2024-11-26 18:29:01.318095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.035 [2024-11-26 18:29:01.318182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:27.035 [2024-11-26 18:29:01.318228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.186 ms 00:27:27.035 [2024-11-26 18:29:01.318241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.035 [2024-11-26 18:29:01.328551] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:27.035 [2024-11-26 18:29:01.331156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.035 [2024-11-26 18:29:01.331201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:27.036 [2024-11-26 18:29:01.331232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.851 ms 00:27:27.036 [2024-11-26 18:29:01.331243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.036 [2024-11-26 18:29:01.331369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.036 [2024-11-26 18:29:01.331388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:27.036 [2024-11-26 18:29:01.331403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:27.036 [2024-11-26 18:29:01.331413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.036 [2024-11-26 18:29:01.333405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.036 [2024-11-26 18:29:01.333440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:27.036 [2024-11-26 18:29:01.333476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.932 ms 00:27:27.036 [2024-11-26 18:29:01.333486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.036 [2024-11-26 18:29:01.333521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.036 [2024-11-26 18:29:01.333536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:27.036 [2024-11-26 18:29:01.333547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:27.036 [2024-11-26 18:29:01.333556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.036 [2024-11-26 18:29:01.333647] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:27.036 [2024-11-26 18:29:01.333664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.036 [2024-11-26 18:29:01.333675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:27.036 [2024-11-26 18:29:01.333686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:27:27.036 [2024-11-26 18:29:01.333696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.036 [2024-11-26 18:29:01.360254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.036 [2024-11-26 
18:29:01.360305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:27.036 [2024-11-26 18:29:01.360343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.531 ms 00:27:27.036 [2024-11-26 18:29:01.360354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.036 [2024-11-26 18:29:01.360438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:27.036 [2024-11-26 18:29:01.360456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:27.036 [2024-11-26 18:29:01.360467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:27.036 [2024-11-26 18:29:01.360477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:27.036 [2024-11-26 18:29:01.362125] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 317.627 ms, result 0 00:27:28.414  [2024-11-26T18:29:03.812Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-26T18:29:04.749Z] Copying: 41/1024 [MB] (22 MBps) [2024-11-26T18:29:05.685Z] Copying: 63/1024 [MB] (22 MBps) [2024-11-26T18:29:06.622Z] Copying: 86/1024 [MB] (22 MBps) [2024-11-26T18:29:07.996Z] Copying: 108/1024 [MB] (22 MBps) [2024-11-26T18:29:08.562Z] Copying: 131/1024 [MB] (22 MBps) [2024-11-26T18:29:09.939Z] Copying: 154/1024 [MB] (22 MBps) [2024-11-26T18:29:10.876Z] Copying: 176/1024 [MB] (22 MBps) [2024-11-26T18:29:11.814Z] Copying: 199/1024 [MB] (22 MBps) [2024-11-26T18:29:12.748Z] Copying: 221/1024 [MB] (22 MBps) [2024-11-26T18:29:13.684Z] Copying: 244/1024 [MB] (22 MBps) [2024-11-26T18:29:14.620Z] Copying: 267/1024 [MB] (23 MBps) [2024-11-26T18:29:15.995Z] Copying: 291/1024 [MB] (23 MBps) [2024-11-26T18:29:16.562Z] Copying: 314/1024 [MB] (23 MBps) [2024-11-26T18:29:17.938Z] Copying: 339/1024 [MB] (24 MBps) [2024-11-26T18:29:18.875Z] Copying: 362/1024 [MB] (23 MBps) [2024-11-26T18:29:19.811Z] Copying: 385/1024 [MB] (22 MBps) [2024-11-26T18:29:20.747Z] Copying: 408/1024 [MB] (23 MBps) [2024-11-26T18:29:21.683Z] Copying: 432/1024 [MB] (23 MBps) [2024-11-26T18:29:22.619Z] Copying: 455/1024 [MB] (22 MBps) [2024-11-26T18:29:23.996Z] Copying: 477/1024 [MB] (22 MBps) [2024-11-26T18:29:24.563Z] Copying: 502/1024 [MB] (24 MBps) [2024-11-26T18:29:25.941Z] Copying: 524/1024 [MB] (22 MBps) [2024-11-26T18:29:26.889Z] Copying: 545/1024 [MB] (21 MBps) [2024-11-26T18:29:27.824Z] Copying: 569/1024 [MB] (24 MBps) [2024-11-26T18:29:28.759Z] Copying: 592/1024 [MB] (22 MBps) [2024-11-26T18:29:29.699Z] Copying: 614/1024 [MB] (21 MBps) [2024-11-26T18:29:30.635Z] Copying: 637/1024 [MB] (23 MBps) [2024-11-26T18:29:31.571Z] Copying: 660/1024 [MB] (22 MBps) [2024-11-26T18:29:32.945Z] Copying: 682/1024 [MB] (21 MBps) [2024-11-26T18:29:33.880Z] Copying: 704/1024 [MB] (22 MBps) [2024-11-26T18:29:34.814Z] Copying: 725/1024 [MB] (21 MBps) [2024-11-26T18:29:35.749Z] Copying: 747/1024 [MB] (21 MBps) [2024-11-26T18:29:36.685Z] Copying: 770/1024 [MB] (22 MBps) [2024-11-26T18:29:37.622Z] Copying: 792/1024 [MB] (22 MBps) [2024-11-26T18:29:38.999Z] Copying: 815/1024 [MB] (22 MBps) [2024-11-26T18:29:39.573Z] Copying: 837/1024 [MB] (22 MBps) [2024-11-26T18:29:40.951Z] Copying: 859/1024 [MB] (22 MBps) [2024-11-26T18:29:41.892Z] Copying: 880/1024 [MB] (21 MBps) [2024-11-26T18:29:42.826Z] Copying: 902/1024 [MB] (21 MBps) [2024-11-26T18:29:43.763Z] Copying: 924/1024 [MB] (21 MBps) [2024-11-26T18:29:44.697Z] Copying: 945/1024 [MB] (21 MBps) [2024-11-26T18:29:45.632Z] Copying: 967/1024 [MB] (21 MBps) 
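The copy progress run continues below and closes with an average figure. As a quick cross-check against the wall-clock stamps in the run itself (first tick near 18:29:03, last near 18:29:48, so roughly 46 s for 1024 MB):

awk 'BEGIN { printf "%.1f MBps\n", 1024 / 46 }'   # ~22.3, consistent with the reported "average 22 MBps"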
[2024-11-26T18:29:46.566Z] Copying: 988/1024 [MB] (21 MBps) [2024-11-26T18:29:47.500Z] Copying: 1010/1024 [MB] (21 MBps) [2024-11-26T18:29:48.065Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-26 18:29:47.922677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.604 [2024-11-26 18:29:47.923093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:13.604 [2024-11-26 18:29:47.923238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:13.604 [2024-11-26 18:29:47.923290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.604 [2024-11-26 18:29:47.923334] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:13.604 [2024-11-26 18:29:47.927021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.604 [2024-11-26 18:29:47.927050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:13.604 [2024-11-26 18:29:47.927064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.662 ms 00:28:13.604 [2024-11-26 18:29:47.927074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.604 [2024-11-26 18:29:47.927321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.604 [2024-11-26 18:29:47.927354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:13.604 [2024-11-26 18:29:47.927367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:28:13.604 [2024-11-26 18:29:47.927382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.604 [2024-11-26 18:29:47.931994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.604 [2024-11-26 18:29:47.932194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:13.604 [2024-11-26 18:29:47.932308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.592 ms 00:28:13.604 [2024-11-26 18:29:47.932358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.604 [2024-11-26 18:29:47.938142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.604 [2024-11-26 18:29:47.938309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:13.604 [2024-11-26 18:29:47.938416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.622 ms 00:28:13.604 [2024-11-26 18:29:47.938488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.604 [2024-11-26 18:29:47.972197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.604 [2024-11-26 18:29:47.972431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:13.604 [2024-11-26 18:29:47.972586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.588 ms 00:28:13.604 [2024-11-26 18:29:47.972891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.604 [2024-11-26 18:29:47.993143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.604 [2024-11-26 18:29:47.993416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:13.604 [2024-11-26 18:29:47.993550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.022 ms 00:28:13.604 [2024-11-26 18:29:47.993632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.863 [2024-11-26 18:29:48.121766] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.863 [2024-11-26 18:29:48.122051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:13.863 [2024-11-26 18:29:48.122174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 127.939 ms 00:28:13.863 [2024-11-26 18:29:48.122225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.863 [2024-11-26 18:29:48.149119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.863 [2024-11-26 18:29:48.149326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:13.864 [2024-11-26 18:29:48.149458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.778 ms 00:28:13.864 [2024-11-26 18:29:48.149506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.864 [2024-11-26 18:29:48.174751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.864 [2024-11-26 18:29:48.174939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:13.864 [2024-11-26 18:29:48.175099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.150 ms 00:28:13.864 [2024-11-26 18:29:48.175147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.864 [2024-11-26 18:29:48.200400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.864 [2024-11-26 18:29:48.200438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:13.864 [2024-11-26 18:29:48.200469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.185 ms 00:28:13.864 [2024-11-26 18:29:48.200479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.864 [2024-11-26 18:29:48.225174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.864 [2024-11-26 18:29:48.225215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:13.864 [2024-11-26 18:29:48.225247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.626 ms 00:28:13.864 [2024-11-26 18:29:48.225257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.864 [2024-11-26 18:29:48.225297] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:13.864 [2024-11-26 18:29:48.225320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:28:13.864 [2024-11-26 18:29:48.225333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225725] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.225992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 
18:29:48.226017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.226028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.226039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.226049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.226060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.226072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.226083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.226094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:13.864 [2024-11-26 18:29:48.226105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 
00:28:13.865 [2024-11-26 18:29:48.226282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:13.865 [2024-11-26 18:29:48.226468] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:13.865 [2024-11-26 18:29:48.226478] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c808b541-5662-42d3-a4fb-479cccb27fb1 00:28:13.865 [2024-11-26 18:29:48.226490] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:28:13.865 [2024-11-26 18:29:48.226500] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 15296 00:28:13.865 [2024-11-26 18:29:48.226510] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 14336 00:28:13.865 [2024-11-26 18:29:48.226521] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0670 00:28:13.865 [2024-11-26 18:29:48.226538] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:13.865 [2024-11-26 18:29:48.226560] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:13.865 [2024-11-26 18:29:48.226570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:13.865 [2024-11-26 18:29:48.226579] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:13.865 [2024-11-26 18:29:48.226588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:13.865 [2024-11-26 18:29:48.226599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.865 [2024-11-26 18:29:48.226646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:13.865 [2024-11-26 18:29:48.226660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.303 ms 00:28:13.865 [2024-11-26 18:29:48.226670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.865 [2024-11-26 18:29:48.241392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.865 [2024-11-26 18:29:48.241430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:13.865 [2024-11-26 18:29:48.241452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.670 ms 00:28:13.865 [2024-11-26 18:29:48.241463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.865 [2024-11-26 18:29:48.242032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.865 [2024-11-26 18:29:48.242059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:13.865 [2024-11-26 18:29:48.242072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:28:13.865 [2024-11-26 18:29:48.242082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.865 [2024-11-26 18:29:48.279246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:13.865 [2024-11-26 18:29:48.279455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:13.865 [2024-11-26 18:29:48.279481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:13.865 [2024-11-26 18:29:48.279493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.865 [2024-11-26 18:29:48.279601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:13.865 [2024-11-26 18:29:48.279618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:13.865 [2024-11-26 18:29:48.279629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:13.865 [2024-11-26 18:29:48.279640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.865 [2024-11-26 18:29:48.279725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:13.865 [2024-11-26 18:29:48.279744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:13.865 [2024-11-26 18:29:48.279764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:13.865 [2024-11-26 18:29:48.279774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.865 [2024-11-26 18:29:48.279796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:13.865 [2024-11-26 18:29:48.279832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:13.865 [2024-11-26 18:29:48.279842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:13.865 [2024-11-26 18:29:48.279852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.126 [2024-11-26 18:29:48.366158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.126 [2024-11-26 18:29:48.366242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
cache 00:28:14.126 [2024-11-26 18:29:48.366277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.126 [2024-11-26 18:29:48.366288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.126 [2024-11-26 18:29:48.436396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.126 [2024-11-26 18:29:48.436476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:14.126 [2024-11-26 18:29:48.436511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.126 [2024-11-26 18:29:48.436523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.126 [2024-11-26 18:29:48.436680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.126 [2024-11-26 18:29:48.436698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:14.126 [2024-11-26 18:29:48.436711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.126 [2024-11-26 18:29:48.436738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.126 [2024-11-26 18:29:48.436785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.126 [2024-11-26 18:29:48.436801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:14.126 [2024-11-26 18:29:48.436812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.126 [2024-11-26 18:29:48.436822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.126 [2024-11-26 18:29:48.436938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.126 [2024-11-26 18:29:48.436957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:14.126 [2024-11-26 18:29:48.436969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.126 [2024-11-26 18:29:48.436995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.126 [2024-11-26 18:29:48.437056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.126 [2024-11-26 18:29:48.437073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:14.126 [2024-11-26 18:29:48.437084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.126 [2024-11-26 18:29:48.437094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.126 [2024-11-26 18:29:48.437139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.126 [2024-11-26 18:29:48.437152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:14.126 [2024-11-26 18:29:48.437162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.126 [2024-11-26 18:29:48.437173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.126 [2024-11-26 18:29:48.437236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.126 [2024-11-26 18:29:48.437252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:14.126 [2024-11-26 18:29:48.437263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.126 [2024-11-26 18:29:48.437273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.126 [2024-11-26 18:29:48.437431] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 
514.716 ms, result 0 00:28:15.065 00:28:15.065 00:28:15.065 18:29:49 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:16.971 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:16.971 18:29:51 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:28:16.971 18:29:51 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:28:16.971 18:29:51 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:16.971 18:29:51 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:16.971 18:29:51 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:16.971 Process with pid 79248 is not found 00:28:16.971 Remove shared memory files 00:28:16.971 18:29:51 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79248 00:28:16.971 18:29:51 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79248 ']' 00:28:16.971 18:29:51 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79248 00:28:16.971 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79248) - No such process 00:28:16.971 18:29:51 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79248 is not found' 00:28:16.971 18:29:51 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:28:16.971 18:29:51 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:16.971 18:29:51 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:28:16.971 18:29:51 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:28:16.971 18:29:51 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:28:16.971 18:29:51 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:16.971 18:29:51 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:28:16.971 ************************************ 00:28:16.971 END TEST ftl_restore 00:28:16.971 ************************************ 00:28:16.971 00:28:16.971 real 3m38.809s 00:28:16.971 user 3m23.940s 00:28:16.971 sys 0m16.034s 00:28:16.971 18:29:51 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:16.971 18:29:51 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:28:16.971 18:29:51 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:28:16.971 18:29:51 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:16.971 18:29:51 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:16.971 18:29:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:16.971 ************************************ 00:28:16.971 START TEST ftl_dirty_shutdown 00:28:16.971 ************************************ 00:28:16.971 18:29:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:28:17.230 * Looking for test storage... 
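Three details of the restore teardown above are worth unpacking (the storage probe for the dirty_shutdown test picks up again right below). First, md5sum -c re-hashes the restored testfile and prints "testfile: OK" only when it matches the checksum recorded before the dirty shutdown. Second, the WAF in the earlier stats dump is simply total writes over user writes: 15296 / 14336 ≈ 1.0670. Third, killprocess uses kill -0 as a liveness probe, which fails here because pid 79248 had already exited. A minimal sketch of that guard pattern, assuming only standard shell semantics (the real wrapper in autotest_common.sh does more bookkeeping):

pid=79248                                  # value taken from the teardown above
if kill -0 "$pid" 2>/dev/null; then        # signal 0 = probe only, nothing delivered
  kill "$pid" && wait "$pid"               # still alive: terminate and reap
else
  echo "Process with pid $pid is not found"
fi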
00:28:17.230 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:17.230 18:29:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:17.230 18:29:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:28:17.230 18:29:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:17.230 18:29:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:17.230 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:17.230 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:17.230 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:17.230 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:17.230 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:17.230 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:17.230 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:17.230 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:17.230 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:17.230 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:17.230 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:17.230 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:17.230 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:28:17.230 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:17.230 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:17.230 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:17.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.231 --rc genhtml_branch_coverage=1 00:28:17.231 --rc genhtml_function_coverage=1 00:28:17.231 --rc genhtml_legend=1 00:28:17.231 --rc geninfo_all_blocks=1 00:28:17.231 --rc geninfo_unexecuted_blocks=1 00:28:17.231 00:28:17.231 ' 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:17.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.231 --rc genhtml_branch_coverage=1 00:28:17.231 --rc genhtml_function_coverage=1 00:28:17.231 --rc genhtml_legend=1 00:28:17.231 --rc geninfo_all_blocks=1 00:28:17.231 --rc geninfo_unexecuted_blocks=1 00:28:17.231 00:28:17.231 ' 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:17.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.231 --rc genhtml_branch_coverage=1 00:28:17.231 --rc genhtml_function_coverage=1 00:28:17.231 --rc genhtml_legend=1 00:28:17.231 --rc geninfo_all_blocks=1 00:28:17.231 --rc geninfo_unexecuted_blocks=1 00:28:17.231 00:28:17.231 ' 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:17.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.231 --rc genhtml_branch_coverage=1 00:28:17.231 --rc genhtml_function_coverage=1 00:28:17.231 --rc genhtml_legend=1 00:28:17.231 --rc geninfo_all_blocks=1 00:28:17.231 --rc geninfo_unexecuted_blocks=1 00:28:17.231 00:28:17.231 ' 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:28:17.231 18:29:51 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81488 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81488 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81488 ']' 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:17.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:17.231 18:29:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:17.491 [2024-11-26 18:29:51.700222] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
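Here dirty_shutdown.sh launches its own spdk_tgt pinned to core 0 (-m 0x1), records its pid in svcpid, and then waitforlisten blocks until the RPC socket /var/tmp/spdk.sock answers. A minimal stand-alone version of that launch-and-wait dance might look like the sketch below; the polling loop is an approximation of waitforlisten, not a copy of it, though rpc_get_methods and the -t timeout flag are real rpc.py features:

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$spdk_tgt" -m 0x1 &                       # one-core target, as in the log
svcpid=$!
until "$rpc_py" -t 1 rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5                                # retry until /var/tmp/spdk.sock answers
done
echo "spdk_tgt listening, pid $svcpid"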
00:28:17.491 [2024-11-26 18:29:51.700808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81488 ] 00:28:17.491 [2024-11-26 18:29:51.899411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.750 [2024-11-26 18:29:52.049851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.685 18:29:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:18.685 18:29:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:18.685 18:29:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:18.685 18:29:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:28:18.685 18:29:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:18.685 18:29:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:28:18.685 18:29:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:18.685 18:29:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:18.943 18:29:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:18.943 18:29:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:18.943 18:29:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:18.943 18:29:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:28:18.943 18:29:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:18.943 18:29:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:18.943 18:29:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:18.943 18:29:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:19.202 18:29:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:19.202 { 00:28:19.202 "name": "nvme0n1", 00:28:19.202 "aliases": [ 00:28:19.202 "004b5775-5c71-4b17-b423-471aabef1211" 00:28:19.202 ], 00:28:19.202 "product_name": "NVMe disk", 00:28:19.202 "block_size": 4096, 00:28:19.202 "num_blocks": 1310720, 00:28:19.202 "uuid": "004b5775-5c71-4b17-b423-471aabef1211", 00:28:19.202 "numa_id": -1, 00:28:19.202 "assigned_rate_limits": { 00:28:19.202 "rw_ios_per_sec": 0, 00:28:19.202 "rw_mbytes_per_sec": 0, 00:28:19.202 "r_mbytes_per_sec": 0, 00:28:19.202 "w_mbytes_per_sec": 0 00:28:19.202 }, 00:28:19.202 "claimed": true, 00:28:19.202 "claim_type": "read_many_write_one", 00:28:19.202 "zoned": false, 00:28:19.202 "supported_io_types": { 00:28:19.202 "read": true, 00:28:19.202 "write": true, 00:28:19.202 "unmap": true, 00:28:19.202 "flush": true, 00:28:19.202 "reset": true, 00:28:19.202 "nvme_admin": true, 00:28:19.202 "nvme_io": true, 00:28:19.202 "nvme_io_md": false, 00:28:19.202 "write_zeroes": true, 00:28:19.202 "zcopy": false, 00:28:19.202 "get_zone_info": false, 00:28:19.202 "zone_management": false, 00:28:19.202 "zone_append": false, 00:28:19.202 "compare": true, 00:28:19.202 "compare_and_write": false, 00:28:19.202 "abort": true, 00:28:19.202 "seek_hole": false, 00:28:19.202 "seek_data": false, 00:28:19.202 
"copy": true, 00:28:19.202 "nvme_iov_md": false 00:28:19.202 }, 00:28:19.202 "driver_specific": { 00:28:19.202 "nvme": [ 00:28:19.202 { 00:28:19.202 "pci_address": "0000:00:11.0", 00:28:19.202 "trid": { 00:28:19.202 "trtype": "PCIe", 00:28:19.202 "traddr": "0000:00:11.0" 00:28:19.202 }, 00:28:19.202 "ctrlr_data": { 00:28:19.202 "cntlid": 0, 00:28:19.202 "vendor_id": "0x1b36", 00:28:19.202 "model_number": "QEMU NVMe Ctrl", 00:28:19.202 "serial_number": "12341", 00:28:19.202 "firmware_revision": "8.0.0", 00:28:19.202 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:19.202 "oacs": { 00:28:19.202 "security": 0, 00:28:19.202 "format": 1, 00:28:19.202 "firmware": 0, 00:28:19.202 "ns_manage": 1 00:28:19.202 }, 00:28:19.202 "multi_ctrlr": false, 00:28:19.202 "ana_reporting": false 00:28:19.202 }, 00:28:19.202 "vs": { 00:28:19.202 "nvme_version": "1.4" 00:28:19.202 }, 00:28:19.202 "ns_data": { 00:28:19.202 "id": 1, 00:28:19.202 "can_share": false 00:28:19.202 } 00:28:19.202 } 00:28:19.202 ], 00:28:19.202 "mp_policy": "active_passive" 00:28:19.202 } 00:28:19.202 } 00:28:19.202 ]' 00:28:19.202 18:29:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:19.202 18:29:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:19.202 18:29:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:19.202 18:29:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:28:19.202 18:29:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:28:19.202 18:29:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:28:19.202 18:29:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:19.202 18:29:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:19.202 18:29:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:19.202 18:29:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:19.202 18:29:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:19.461 18:29:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=14594a72-31c6-4008-8481-4d20a12ccdec 00:28:19.461 18:29:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:19.461 18:29:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 14594a72-31c6-4008-8481-4d20a12ccdec 00:28:20.028 18:29:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:20.028 18:29:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=5cb6cec6-ec6e-4f33-a6a0-8c49774e855d 00:28:20.028 18:29:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5cb6cec6-ec6e-4f33-a6a0-8c49774e855d 00:28:20.286 18:29:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=fd883aec-d420-42f6-a31f-e8daf36ebd15 00:28:20.286 18:29:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:28:20.286 18:29:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fd883aec-d420-42f6-a31f-e8daf36ebd15 00:28:20.286 18:29:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:28:20.286 18:29:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:28:20.286 18:29:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=fd883aec-d420-42f6-a31f-e8daf36ebd15 00:28:20.286 18:29:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:28:20.286 18:29:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size fd883aec-d420-42f6-a31f-e8daf36ebd15 00:28:20.286 18:29:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=fd883aec-d420-42f6-a31f-e8daf36ebd15 00:28:20.286 18:29:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:20.286 18:29:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:20.286 18:29:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:20.286 18:29:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fd883aec-d420-42f6-a31f-e8daf36ebd15 00:28:20.546 18:29:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:20.546 { 00:28:20.546 "name": "fd883aec-d420-42f6-a31f-e8daf36ebd15", 00:28:20.546 "aliases": [ 00:28:20.546 "lvs/nvme0n1p0" 00:28:20.546 ], 00:28:20.546 "product_name": "Logical Volume", 00:28:20.546 "block_size": 4096, 00:28:20.546 "num_blocks": 26476544, 00:28:20.546 "uuid": "fd883aec-d420-42f6-a31f-e8daf36ebd15", 00:28:20.546 "assigned_rate_limits": { 00:28:20.546 "rw_ios_per_sec": 0, 00:28:20.546 "rw_mbytes_per_sec": 0, 00:28:20.546 "r_mbytes_per_sec": 0, 00:28:20.546 "w_mbytes_per_sec": 0 00:28:20.546 }, 00:28:20.546 "claimed": false, 00:28:20.546 "zoned": false, 00:28:20.546 "supported_io_types": { 00:28:20.546 "read": true, 00:28:20.546 "write": true, 00:28:20.546 "unmap": true, 00:28:20.546 "flush": false, 00:28:20.546 "reset": true, 00:28:20.546 "nvme_admin": false, 00:28:20.546 "nvme_io": false, 00:28:20.546 "nvme_io_md": false, 00:28:20.546 "write_zeroes": true, 00:28:20.546 "zcopy": false, 00:28:20.546 "get_zone_info": false, 00:28:20.546 "zone_management": false, 00:28:20.546 "zone_append": false, 00:28:20.546 "compare": false, 00:28:20.546 "compare_and_write": false, 00:28:20.546 "abort": false, 00:28:20.546 "seek_hole": true, 00:28:20.546 "seek_data": true, 00:28:20.546 "copy": false, 00:28:20.546 "nvme_iov_md": false 00:28:20.546 }, 00:28:20.546 "driver_specific": { 00:28:20.546 "lvol": { 00:28:20.546 "lvol_store_uuid": "5cb6cec6-ec6e-4f33-a6a0-8c49774e855d", 00:28:20.546 "base_bdev": "nvme0n1", 00:28:20.546 "thin_provision": true, 00:28:20.546 "num_allocated_clusters": 0, 00:28:20.546 "snapshot": false, 00:28:20.546 "clone": false, 00:28:20.546 "esnap_clone": false 00:28:20.546 } 00:28:20.546 } 00:28:20.546 } 00:28:20.546 ]' 00:28:20.546 18:29:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:20.546 18:29:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:20.546 18:29:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:20.805 18:29:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:20.805 18:29:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:20.805 18:29:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:20.805 18:29:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:28:20.805 18:29:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:20.805 18:29:55 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:21.065 18:29:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:21.065 18:29:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:21.065 18:29:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size fd883aec-d420-42f6-a31f-e8daf36ebd15 00:28:21.065 18:29:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=fd883aec-d420-42f6-a31f-e8daf36ebd15 00:28:21.065 18:29:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:21.065 18:29:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:21.065 18:29:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:21.065 18:29:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fd883aec-d420-42f6-a31f-e8daf36ebd15 00:28:21.324 18:29:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:21.324 { 00:28:21.324 "name": "fd883aec-d420-42f6-a31f-e8daf36ebd15", 00:28:21.324 "aliases": [ 00:28:21.324 "lvs/nvme0n1p0" 00:28:21.324 ], 00:28:21.324 "product_name": "Logical Volume", 00:28:21.324 "block_size": 4096, 00:28:21.324 "num_blocks": 26476544, 00:28:21.324 "uuid": "fd883aec-d420-42f6-a31f-e8daf36ebd15", 00:28:21.324 "assigned_rate_limits": { 00:28:21.324 "rw_ios_per_sec": 0, 00:28:21.324 "rw_mbytes_per_sec": 0, 00:28:21.324 "r_mbytes_per_sec": 0, 00:28:21.324 "w_mbytes_per_sec": 0 00:28:21.324 }, 00:28:21.324 "claimed": false, 00:28:21.324 "zoned": false, 00:28:21.324 "supported_io_types": { 00:28:21.324 "read": true, 00:28:21.324 "write": true, 00:28:21.324 "unmap": true, 00:28:21.324 "flush": false, 00:28:21.324 "reset": true, 00:28:21.324 "nvme_admin": false, 00:28:21.324 "nvme_io": false, 00:28:21.324 "nvme_io_md": false, 00:28:21.324 "write_zeroes": true, 00:28:21.324 "zcopy": false, 00:28:21.324 "get_zone_info": false, 00:28:21.324 "zone_management": false, 00:28:21.324 "zone_append": false, 00:28:21.324 "compare": false, 00:28:21.324 "compare_and_write": false, 00:28:21.324 "abort": false, 00:28:21.324 "seek_hole": true, 00:28:21.324 "seek_data": true, 00:28:21.324 "copy": false, 00:28:21.324 "nvme_iov_md": false 00:28:21.324 }, 00:28:21.324 "driver_specific": { 00:28:21.324 "lvol": { 00:28:21.324 "lvol_store_uuid": "5cb6cec6-ec6e-4f33-a6a0-8c49774e855d", 00:28:21.324 "base_bdev": "nvme0n1", 00:28:21.324 "thin_provision": true, 00:28:21.324 "num_allocated_clusters": 0, 00:28:21.324 "snapshot": false, 00:28:21.324 "clone": false, 00:28:21.324 "esnap_clone": false 00:28:21.324 } 00:28:21.324 } 00:28:21.324 } 00:28:21.324 ]' 00:28:21.324 18:29:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:21.324 18:29:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:21.324 18:29:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:21.324 18:29:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:21.324 18:29:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:21.324 18:29:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:21.324 18:29:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:28:21.324 18:29:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:21.582 18:29:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:28:21.582 18:29:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size fd883aec-d420-42f6-a31f-e8daf36ebd15 00:28:21.582 18:29:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=fd883aec-d420-42f6-a31f-e8daf36ebd15 00:28:21.582 18:29:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:28:21.582 18:29:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:28:21.582 18:29:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:28:21.583 18:29:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fd883aec-d420-42f6-a31f-e8daf36ebd15 00:28:21.841 18:29:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:28:21.841 { 00:28:21.841 "name": "fd883aec-d420-42f6-a31f-e8daf36ebd15", 00:28:21.841 "aliases": [ 00:28:21.841 "lvs/nvme0n1p0" 00:28:21.841 ], 00:28:21.841 "product_name": "Logical Volume", 00:28:21.841 "block_size": 4096, 00:28:21.841 "num_blocks": 26476544, 00:28:21.841 "uuid": "fd883aec-d420-42f6-a31f-e8daf36ebd15", 00:28:21.841 "assigned_rate_limits": { 00:28:21.841 "rw_ios_per_sec": 0, 00:28:21.841 "rw_mbytes_per_sec": 0, 00:28:21.841 "r_mbytes_per_sec": 0, 00:28:21.841 "w_mbytes_per_sec": 0 00:28:21.841 }, 00:28:21.841 "claimed": false, 00:28:21.841 "zoned": false, 00:28:21.841 "supported_io_types": { 00:28:21.841 "read": true, 00:28:21.841 "write": true, 00:28:21.841 "unmap": true, 00:28:21.841 "flush": false, 00:28:21.841 "reset": true, 00:28:21.841 "nvme_admin": false, 00:28:21.841 "nvme_io": false, 00:28:21.841 "nvme_io_md": false, 00:28:21.841 "write_zeroes": true, 00:28:21.841 "zcopy": false, 00:28:21.841 "get_zone_info": false, 00:28:21.841 "zone_management": false, 00:28:21.841 "zone_append": false, 00:28:21.841 "compare": false, 00:28:21.841 "compare_and_write": false, 00:28:21.841 "abort": false, 00:28:21.841 "seek_hole": true, 00:28:21.841 "seek_data": true, 00:28:21.841 "copy": false, 00:28:21.841 "nvme_iov_md": false 00:28:21.841 }, 00:28:21.841 "driver_specific": { 00:28:21.841 "lvol": { 00:28:21.841 "lvol_store_uuid": "5cb6cec6-ec6e-4f33-a6a0-8c49774e855d", 00:28:21.841 "base_bdev": "nvme0n1", 00:28:21.841 "thin_provision": true, 00:28:21.841 "num_allocated_clusters": 0, 00:28:21.841 "snapshot": false, 00:28:21.841 "clone": false, 00:28:21.841 "esnap_clone": false 00:28:21.841 } 00:28:21.841 } 00:28:21.841 } 00:28:21.841 ]' 00:28:21.841 18:29:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:28:21.841 18:29:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:28:21.841 18:29:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:28:21.841 18:29:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:28:21.841 18:29:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:28:21.841 18:29:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:28:21.841 18:29:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:28:21.841 18:29:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d fd883aec-d420-42f6-a31f-e8daf36ebd15 
--l2p_dram_limit 10' 00:28:21.841 18:29:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:28:21.841 18:29:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:28:21.841 18:29:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:21.841 18:29:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fd883aec-d420-42f6-a31f-e8daf36ebd15 --l2p_dram_limit 10 -c nvc0n1p0 00:28:22.100 [2024-11-26 18:29:56.533979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.100 [2024-11-26 18:29:56.534240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:22.100 [2024-11-26 18:29:56.534286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:22.100 [2024-11-26 18:29:56.534304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.100 [2024-11-26 18:29:56.534414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.100 [2024-11-26 18:29:56.534438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:22.100 [2024-11-26 18:29:56.534456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:28:22.100 [2024-11-26 18:29:56.534470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.100 [2024-11-26 18:29:56.534510] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:22.100 [2024-11-26 18:29:56.535520] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:22.100 [2024-11-26 18:29:56.535556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.100 [2024-11-26 18:29:56.535584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:22.100 [2024-11-26 18:29:56.535605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms 00:28:22.100 [2024-11-26 18:29:56.535619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.100 [2024-11-26 18:29:56.535771] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 77389958-60a0-4bcf-8661-63f2fd98ce2c 00:28:22.100 [2024-11-26 18:29:56.537706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.100 [2024-11-26 18:29:56.537753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:22.100 [2024-11-26 18:29:56.537773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:28:22.100 [2024-11-26 18:29:56.537788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.100 [2024-11-26 18:29:56.547826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.100 [2024-11-26 18:29:56.547903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:22.100 [2024-11-26 18:29:56.547923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.956 ms 00:28:22.100 [2024-11-26 18:29:56.547939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.100 [2024-11-26 18:29:56.548066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.100 [2024-11-26 18:29:56.548093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:22.100 [2024-11-26 18:29:56.548108] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:28:22.100 [2024-11-26 18:29:56.548130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.100 [2024-11-26 18:29:56.548211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.100 [2024-11-26 18:29:56.548237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:22.101 [2024-11-26 18:29:56.548256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:22.101 [2024-11-26 18:29:56.548272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.101 [2024-11-26 18:29:56.548309] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:22.101 [2024-11-26 18:29:56.553064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.101 [2024-11-26 18:29:56.553247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:22.101 [2024-11-26 18:29:56.553284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.759 ms 00:28:22.101 [2024-11-26 18:29:56.553299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.101 [2024-11-26 18:29:56.553352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.101 [2024-11-26 18:29:56.553371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:22.101 [2024-11-26 18:29:56.553388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:22.101 [2024-11-26 18:29:56.553400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.101 [2024-11-26 18:29:56.553452] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:22.101 [2024-11-26 18:29:56.553662] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:22.101 [2024-11-26 18:29:56.553695] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:22.101 [2024-11-26 18:29:56.553713] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:22.101 [2024-11-26 18:29:56.553733] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:22.101 [2024-11-26 18:29:56.553764] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:22.101 [2024-11-26 18:29:56.553779] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:22.101 [2024-11-26 18:29:56.553795] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:22.101 [2024-11-26 18:29:56.553809] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:22.101 [2024-11-26 18:29:56.553821] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:22.101 [2024-11-26 18:29:56.553837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.101 [2024-11-26 18:29:56.553862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:22.101 [2024-11-26 18:29:56.553882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:28:22.101 [2024-11-26 18:29:56.553910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.101 [2024-11-26 18:29:56.554013] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.101 [2024-11-26 18:29:56.554030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:22.101 [2024-11-26 18:29:56.554045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:22.101 [2024-11-26 18:29:56.554057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.101 [2024-11-26 18:29:56.554171] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:22.101 [2024-11-26 18:29:56.554191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:22.101 [2024-11-26 18:29:56.554208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:22.101 [2024-11-26 18:29:56.554220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:22.101 [2024-11-26 18:29:56.554235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:22.101 [2024-11-26 18:29:56.554246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:22.101 [2024-11-26 18:29:56.554260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:22.101 [2024-11-26 18:29:56.554272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:22.101 [2024-11-26 18:29:56.554285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:22.101 [2024-11-26 18:29:56.554296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:22.101 [2024-11-26 18:29:56.554310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:22.101 [2024-11-26 18:29:56.554321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:22.101 [2024-11-26 18:29:56.554336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:22.101 [2024-11-26 18:29:56.554348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:22.101 [2024-11-26 18:29:56.554362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:22.101 [2024-11-26 18:29:56.554373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:22.101 [2024-11-26 18:29:56.554391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:22.101 [2024-11-26 18:29:56.554403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:22.101 [2024-11-26 18:29:56.554417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:22.101 [2024-11-26 18:29:56.554429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:22.101 [2024-11-26 18:29:56.554443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:22.101 [2024-11-26 18:29:56.554456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:22.101 [2024-11-26 18:29:56.554471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:22.101 [2024-11-26 18:29:56.554482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:22.101 [2024-11-26 18:29:56.554496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:22.101 [2024-11-26 18:29:56.554508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:22.101 [2024-11-26 18:29:56.554521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:22.101 [2024-11-26 18:29:56.554533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:22.101 [2024-11-26 18:29:56.554547] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:22.101 [2024-11-26 18:29:56.554558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:22.101 [2024-11-26 18:29:56.554589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:22.101 [2024-11-26 18:29:56.554602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:22.101 [2024-11-26 18:29:56.554665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:22.101 [2024-11-26 18:29:56.554682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:22.101 [2024-11-26 18:29:56.554697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:22.101 [2024-11-26 18:29:56.554711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:22.101 [2024-11-26 18:29:56.554727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:22.101 [2024-11-26 18:29:56.554740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:22.101 [2024-11-26 18:29:56.554755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:22.101 [2024-11-26 18:29:56.554768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:22.101 [2024-11-26 18:29:56.554783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:22.101 [2024-11-26 18:29:56.554796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:22.101 [2024-11-26 18:29:56.554812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:22.101 [2024-11-26 18:29:56.554824] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:22.101 [2024-11-26 18:29:56.554843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:22.101 [2024-11-26 18:29:56.554857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:22.101 [2024-11-26 18:29:56.554874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:22.101 [2024-11-26 18:29:56.554889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:22.101 [2024-11-26 18:29:56.554907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:22.101 [2024-11-26 18:29:56.554936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:22.101 [2024-11-26 18:29:56.554969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:22.101 [2024-11-26 18:29:56.554981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:22.101 [2024-11-26 18:29:56.554996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:22.101 [2024-11-26 18:29:56.555016] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:22.101 [2024-11-26 18:29:56.555042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:22.101 [2024-11-26 18:29:56.555056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:22.101 [2024-11-26 18:29:56.555070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:22.101 [2024-11-26 18:29:56.555092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:22.101 [2024-11-26 18:29:56.555106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:22.101 [2024-11-26 18:29:56.555117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:22.101 [2024-11-26 18:29:56.555131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:22.101 [2024-11-26 18:29:56.555142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:22.101 [2024-11-26 18:29:56.555156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:22.101 [2024-11-26 18:29:56.555167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:22.101 [2024-11-26 18:29:56.555184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:22.101 [2024-11-26 18:29:56.555195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:22.101 [2024-11-26 18:29:56.555215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:22.101 [2024-11-26 18:29:56.555227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:22.101 [2024-11-26 18:29:56.555242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:22.101 [2024-11-26 18:29:56.555254] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:22.101 [2024-11-26 18:29:56.555269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:22.102 [2024-11-26 18:29:56.555282] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:22.102 [2024-11-26 18:29:56.555297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:22.102 [2024-11-26 18:29:56.555316] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:22.102 [2024-11-26 18:29:56.555332] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:22.102 [2024-11-26 18:29:56.555345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:22.102 [2024-11-26 18:29:56.555360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:22.102 [2024-11-26 18:29:56.555373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.233 ms 00:28:22.102 [2024-11-26 18:29:56.555387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.102 [2024-11-26 18:29:56.555444] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:22.102 [2024-11-26 18:29:56.555468] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:28:25.397 [2024-11-26 18:29:59.649991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.397 [2024-11-26 18:29:59.650380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:28:25.397 [2024-11-26 18:29:59.650524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3094.562 ms 00:28:25.397 [2024-11-26 18:29:59.650740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.397 [2024-11-26 18:29:59.685487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.397 [2024-11-26 18:29:59.685821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:25.397 [2024-11-26 18:29:59.685976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.391 ms 00:28:25.397 [2024-11-26 18:29:59.686037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.397 [2024-11-26 18:29:59.686333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.397 [2024-11-26 18:29:59.686508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:25.397 [2024-11-26 18:29:59.686667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:28:25.397 [2024-11-26 18:29:59.686743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.397 [2024-11-26 18:29:59.724896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.397 [2024-11-26 18:29:59.725127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:25.397 [2024-11-26 18:29:59.725257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.980 ms 00:28:25.397 [2024-11-26 18:29:59.725316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.397 [2024-11-26 18:29:59.725601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.397 [2024-11-26 18:29:59.725669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:25.397 [2024-11-26 18:29:59.725859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:25.397 [2024-11-26 18:29:59.725932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.397 [2024-11-26 18:29:59.726813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.397 [2024-11-26 18:29:59.726966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:25.397 [2024-11-26 18:29:59.727079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:28:25.397 [2024-11-26 18:29:59.727202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.397 [2024-11-26 18:29:59.727394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.397 [2024-11-26 18:29:59.727457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:25.397 [2024-11-26 18:29:59.727571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:28:25.397 [2024-11-26 18:29:59.727637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.397 [2024-11-26 18:29:59.746905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.397 [2024-11-26 18:29:59.747082] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:25.397 [2024-11-26 18:29:59.747280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.048 ms 00:28:25.397 [2024-11-26 18:29:59.747341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.397 [2024-11-26 18:29:59.767683] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:25.397 [2024-11-26 18:29:59.771738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.397 [2024-11-26 18:29:59.771777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:25.397 [2024-11-26 18:29:59.771800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.253 ms 00:28:25.397 [2024-11-26 18:29:59.771812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.397 [2024-11-26 18:29:59.847874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.397 [2024-11-26 18:29:59.847968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:28:25.397 [2024-11-26 18:29:59.847998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.018 ms 00:28:25.397 [2024-11-26 18:29:59.848012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.397 [2024-11-26 18:29:59.848230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.397 [2024-11-26 18:29:59.848251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:25.397 [2024-11-26 18:29:59.848273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:28:25.397 [2024-11-26 18:29:59.848286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.655 [2024-11-26 18:29:59.873278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.655 [2024-11-26 18:29:59.873323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:28:25.655 [2024-11-26 18:29:59.873347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.923 ms 00:28:25.655 [2024-11-26 18:29:59.873360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.655 [2024-11-26 18:29:59.897689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.655 [2024-11-26 18:29:59.897733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:28:25.655 [2024-11-26 18:29:59.897773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.272 ms 00:28:25.655 [2024-11-26 18:29:59.897786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.655 [2024-11-26 18:29:59.898588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.655 [2024-11-26 18:29:59.898674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:25.655 [2024-11-26 18:29:59.898702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:28:25.655 [2024-11-26 18:29:59.898715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.655 [2024-11-26 18:29:59.972905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.655 [2024-11-26 18:29:59.972972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:28:25.655 [2024-11-26 18:29:59.972999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.132 ms 00:28:25.655 [2024-11-26 18:29:59.973014] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.655 [2024-11-26 18:29:59.999586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.655 [2024-11-26 18:29:59.999631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:28:25.656 [2024-11-26 18:29:59.999654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.471 ms 00:28:25.656 [2024-11-26 18:29:59.999668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.656 [2024-11-26 18:30:00.028249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.656 [2024-11-26 18:30:00.028296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:28:25.656 [2024-11-26 18:30:00.028336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.528 ms 00:28:25.656 [2024-11-26 18:30:00.028351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.656 [2024-11-26 18:30:00.056041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.656 [2024-11-26 18:30:00.056087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:25.656 [2024-11-26 18:30:00.056127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.632 ms 00:28:25.656 [2024-11-26 18:30:00.056141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.656 [2024-11-26 18:30:00.056203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.656 [2024-11-26 18:30:00.056223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:25.656 [2024-11-26 18:30:00.056244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:25.656 [2024-11-26 18:30:00.056257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.656 [2024-11-26 18:30:00.056371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.656 [2024-11-26 18:30:00.056395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:25.656 [2024-11-26 18:30:00.056413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:28:25.656 [2024-11-26 18:30:00.056426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.656 [2024-11-26 18:30:00.058034] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3523.427 ms, result 0 00:28:25.656 { 00:28:25.656 "name": "ftl0", 00:28:25.656 "uuid": "77389958-60a0-4bcf-8661-63f2fd98ce2c" 00:28:25.656 } 00:28:25.656 18:30:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:28:25.656 18:30:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:28:25.914 18:30:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:28:25.914 18:30:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:28:26.172 18:30:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:28:26.172 /dev/nbd0 00:28:26.172 18:30:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:28:26.172 18:30:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:28:26.172 18:30:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:28:26.172 18:30:00 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:28:26.172 18:30:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:28:26.172 18:30:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:28:26.172 18:30:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:28:26.172 18:30:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:28:26.172 18:30:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:28:26.172 18:30:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:28:26.172 1+0 records in 00:28:26.172 1+0 records out 00:28:26.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00066122 s, 6.2 MB/s 00:28:26.172 18:30:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:28:26.431 18:30:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:28:26.431 18:30:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:28:26.431 18:30:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:28:26.431 18:30:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:28:26.431 18:30:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:28:26.431 [2024-11-26 18:30:00.716458] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:28:26.431 [2024-11-26 18:30:00.716655] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81636 ] 00:28:26.431 [2024-11-26 18:30:00.883633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.690 [2024-11-26 18:30:00.989622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:28.065  [2024-11-26T18:30:03.460Z] Copying: 192/1024 [MB] (192 MBps) [2024-11-26T18:30:04.409Z] Copying: 377/1024 [MB] (185 MBps) [2024-11-26T18:30:05.345Z] Copying: 567/1024 [MB] (189 MBps) [2024-11-26T18:30:06.722Z] Copying: 744/1024 [MB] (176 MBps) [2024-11-26T18:30:06.980Z] Copying: 917/1024 [MB] (173 MBps) [2024-11-26T18:30:07.915Z] Copying: 1024/1024 [MB] (average 182 MBps) 00:28:33.454 00:28:33.454 18:30:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:35.357 18:30:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:28:35.357 [2024-11-26 18:30:09.797113] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
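For reference, the setup the trace above exercises can be replayed by hand against a running SPDK target. What follows is a minimal sketch, not part of the test itself: it assumes spdk_tgt is already running and that the thin-provisioned lvol fd883aec-d420-42f6-a31f-e8daf36ebd15 (the bdev listed in the bdev_get_bdevs output above) already exists from the earlier prepare steps; every RPC, flag, and path below is copied verbatim from the trace.

#!/usr/bin/env bash
# Sketch: replay of the FTL dirty-shutdown setup traced above, assuming a
# running spdk_tgt that already exposes the lvol base bdev.
set -euo pipefail

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
BASE=fd883aec-d420-42f6-a31f-e8daf36ebd15
TESTFILE=/home/vagrant/spdk_repo/spdk/test/ftl/testfile

# Size the base volume the way get_bdev_size does above:
# block_size * num_blocks = 4096 B * 26476544 = 103424 MiB.
bs=$("$RPC" bdev_get_bdevs -b "$BASE" | jq '.[] .block_size')
nb=$("$RPC" bdev_get_bdevs -b "$BASE" | jq '.[] .num_blocks')
echo "base bdev: $(( bs * nb / 1024 / 1024 )) MiB"

# Attach the NV-cache NVMe device and carve one 5171 MiB split from it
# (5171 = 103424 / 20, matching cache_size in the trace).
"$RPC" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
"$RPC" bdev_split_create nvc0n1 -s 5171 1   # -> nvc0n1p0

# Create the FTL bdev with a 10 MiB L2P DRAM limit. Startup scrubs the NV
# cache region (the trace shows "Scrub NV cache ... 3094.562 ms" for its 5
# chunks), hence the extended 240 s RPC timeout.
"$RPC" -t 240 bdev_ftl_create -b ftl0 -d "$BASE" --l2p_dram_limit 10 -c nvc0n1p0

# Expose ftl0 over NBD and stage 1 GiB (262144 x 4 KiB blocks) of random
# data through it, as the dirty_shutdown.sh steps above do.
modprobe nbd
"$RPC" nbd_start_disk ftl0 /dev/nbd0
"$DD" -m 0x2 --if=/dev/urandom --of="$TESTFILE" --bs=4096 --count=262144
md5sum "$TESTFILE"
"$DD" -m 0x2 --if="$TESTFILE" --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

Note that the startup trace above includes "Set FTL dirty state" before the core poller starts, so the instance is deliberately left dirty while the data goes through /dev/nbd0; the md5 sum recorded here is the reference against which the test later compares after the dirty shutdown and reload.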
00:28:35.357 [2024-11-26 18:30:09.797276] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81729 ] 00:28:35.616 [2024-11-26 18:30:09.973711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.874 [2024-11-26 18:30:10.121438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.251  [2024-11-26T18:30:12.647Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-26T18:30:13.581Z] Copying: 27/1024 [MB] (12 MBps) [2024-11-26T18:30:14.516Z] Copying: 40/1024 [MB] (12 MBps) [2024-11-26T18:30:15.449Z] Copying: 52/1024 [MB] (12 MBps) [2024-11-26T18:30:16.824Z] Copying: 66/1024 [MB] (13 MBps) [2024-11-26T18:30:17.758Z] Copying: 80/1024 [MB] (14 MBps) [2024-11-26T18:30:18.692Z] Copying: 95/1024 [MB] (14 MBps) [2024-11-26T18:30:19.625Z] Copying: 110/1024 [MB] (14 MBps) [2024-11-26T18:30:20.559Z] Copying: 125/1024 [MB] (15 MBps) [2024-11-26T18:30:21.492Z] Copying: 140/1024 [MB] (14 MBps) [2024-11-26T18:30:22.442Z] Copying: 155/1024 [MB] (14 MBps) [2024-11-26T18:30:23.408Z] Copying: 169/1024 [MB] (14 MBps) [2024-11-26T18:30:24.782Z] Copying: 184/1024 [MB] (14 MBps) [2024-11-26T18:30:25.716Z] Copying: 200/1024 [MB] (15 MBps) [2024-11-26T18:30:26.651Z] Copying: 215/1024 [MB] (15 MBps) [2024-11-26T18:30:27.586Z] Copying: 230/1024 [MB] (15 MBps) [2024-11-26T18:30:28.519Z] Copying: 246/1024 [MB] (15 MBps) [2024-11-26T18:30:29.452Z] Copying: 261/1024 [MB] (15 MBps) [2024-11-26T18:30:30.824Z] Copying: 277/1024 [MB] (15 MBps) [2024-11-26T18:30:31.755Z] Copying: 291/1024 [MB] (14 MBps) [2024-11-26T18:30:32.689Z] Copying: 306/1024 [MB] (14 MBps) [2024-11-26T18:30:33.622Z] Copying: 321/1024 [MB] (14 MBps) [2024-11-26T18:30:34.556Z] Copying: 336/1024 [MB] (14 MBps) [2024-11-26T18:30:35.490Z] Copying: 350/1024 [MB] (14 MBps) [2024-11-26T18:30:36.426Z] Copying: 366/1024 [MB] (15 MBps) [2024-11-26T18:30:37.798Z] Copying: 380/1024 [MB] (14 MBps) [2024-11-26T18:30:38.746Z] Copying: 395/1024 [MB] (14 MBps) [2024-11-26T18:30:39.681Z] Copying: 409/1024 [MB] (14 MBps) [2024-11-26T18:30:40.615Z] Copying: 424/1024 [MB] (14 MBps) [2024-11-26T18:30:41.548Z] Copying: 439/1024 [MB] (14 MBps) [2024-11-26T18:30:42.483Z] Copying: 454/1024 [MB] (14 MBps) [2024-11-26T18:30:43.415Z] Copying: 468/1024 [MB] (14 MBps) [2024-11-26T18:30:44.791Z] Copying: 483/1024 [MB] (14 MBps) [2024-11-26T18:30:45.728Z] Copying: 498/1024 [MB] (14 MBps) [2024-11-26T18:30:46.667Z] Copying: 513/1024 [MB] (15 MBps) [2024-11-26T18:30:47.604Z] Copying: 528/1024 [MB] (15 MBps) [2024-11-26T18:30:48.535Z] Copying: 543/1024 [MB] (15 MBps) [2024-11-26T18:30:49.470Z] Copying: 558/1024 [MB] (14 MBps) [2024-11-26T18:30:50.405Z] Copying: 573/1024 [MB] (14 MBps) [2024-11-26T18:30:51.778Z] Copying: 588/1024 [MB] (15 MBps) [2024-11-26T18:30:52.768Z] Copying: 603/1024 [MB] (14 MBps) [2024-11-26T18:30:53.702Z] Copying: 618/1024 [MB] (14 MBps) [2024-11-26T18:30:54.634Z] Copying: 632/1024 [MB] (14 MBps) [2024-11-26T18:30:55.575Z] Copying: 648/1024 [MB] (15 MBps) [2024-11-26T18:30:56.508Z] Copying: 663/1024 [MB] (14 MBps) [2024-11-26T18:30:57.442Z] Copying: 678/1024 [MB] (15 MBps) [2024-11-26T18:30:58.816Z] Copying: 693/1024 [MB] (15 MBps) [2024-11-26T18:30:59.755Z] Copying: 708/1024 [MB] (15 MBps) [2024-11-26T18:31:00.691Z] Copying: 724/1024 [MB] (15 MBps) [2024-11-26T18:31:01.625Z] Copying: 738/1024 [MB] (14 MBps) [2024-11-26T18:31:02.561Z] 
Copying: 753/1024 [MB] (14 MBps) [2024-11-26T18:31:03.494Z] Copying: 769/1024 [MB] (15 MBps) [2024-11-26T18:31:04.426Z] Copying: 784/1024 [MB] (15 MBps) [2024-11-26T18:31:05.801Z] Copying: 799/1024 [MB] (15 MBps) [2024-11-26T18:31:06.737Z] Copying: 814/1024 [MB] (15 MBps) [2024-11-26T18:31:07.669Z] Copying: 829/1024 [MB] (14 MBps) [2024-11-26T18:31:08.606Z] Copying: 844/1024 [MB] (14 MBps) [2024-11-26T18:31:09.540Z] Copying: 859/1024 [MB] (15 MBps) [2024-11-26T18:31:10.473Z] Copying: 874/1024 [MB] (15 MBps) [2024-11-26T18:31:11.408Z] Copying: 890/1024 [MB] (15 MBps) [2024-11-26T18:31:12.778Z] Copying: 905/1024 [MB] (15 MBps) [2024-11-26T18:31:13.709Z] Copying: 921/1024 [MB] (15 MBps) [2024-11-26T18:31:14.643Z] Copying: 936/1024 [MB] (15 MBps) [2024-11-26T18:31:15.579Z] Copying: 951/1024 [MB] (15 MBps) [2024-11-26T18:31:16.512Z] Copying: 966/1024 [MB] (15 MBps) [2024-11-26T18:31:17.447Z] Copying: 981/1024 [MB] (15 MBps) [2024-11-26T18:31:18.478Z] Copying: 997/1024 [MB] (15 MBps) [2024-11-26T18:31:19.424Z] Copying: 1012/1024 [MB] (15 MBps) [2024-11-26T18:31:20.362Z] Copying: 1024/1024 [MB] (average 14 MBps) 00:29:45.901 00:29:45.901 18:31:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:29:45.901 18:31:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:29:46.159 18:31:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:46.419 [2024-11-26 18:31:20.661620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.419 [2024-11-26 18:31:20.661674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:46.419 [2024-11-26 18:31:20.661694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:46.419 [2024-11-26 18:31:20.661710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.419 [2024-11-26 18:31:20.661743] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:46.419 [2024-11-26 18:31:20.665196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.419 [2024-11-26 18:31:20.665225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:46.419 [2024-11-26 18:31:20.665242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.428 ms 00:29:46.419 [2024-11-26 18:31:20.665252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.419 [2024-11-26 18:31:20.667453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.419 [2024-11-26 18:31:20.667702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:46.419 [2024-11-26 18:31:20.667733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.162 ms 00:29:46.419 [2024-11-26 18:31:20.667745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.419 [2024-11-26 18:31:20.683835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.419 [2024-11-26 18:31:20.683875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:46.419 [2024-11-26 18:31:20.683910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.051 ms 00:29:46.419 [2024-11-26 18:31:20.683922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.419 [2024-11-26 18:31:20.689261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:29:46.419 [2024-11-26 18:31:20.689292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:46.419 [2024-11-26 18:31:20.689307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.295 ms 00:29:46.419 [2024-11-26 18:31:20.689325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.419 [2024-11-26 18:31:20.714644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.419 [2024-11-26 18:31:20.714698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:46.419 [2024-11-26 18:31:20.714734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.251 ms 00:29:46.419 [2024-11-26 18:31:20.714745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.419 [2024-11-26 18:31:20.732803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.419 [2024-11-26 18:31:20.732849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:46.419 [2024-11-26 18:31:20.732870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.000 ms 00:29:46.419 [2024-11-26 18:31:20.732880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.419 [2024-11-26 18:31:20.733027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.419 [2024-11-26 18:31:20.733050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:46.420 [2024-11-26 18:31:20.733063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:29:46.420 [2024-11-26 18:31:20.733073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.420 [2024-11-26 18:31:20.759181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.420 [2024-11-26 18:31:20.759373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:46.420 [2024-11-26 18:31:20.759500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.079 ms 00:29:46.420 [2024-11-26 18:31:20.759687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.420 [2024-11-26 18:31:20.787005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.420 [2024-11-26 18:31:20.787190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:46.420 [2024-11-26 18:31:20.787320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.225 ms 00:29:46.420 [2024-11-26 18:31:20.787341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.420 [2024-11-26 18:31:20.811992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.420 [2024-11-26 18:31:20.812031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:46.420 [2024-11-26 18:31:20.812064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.599 ms 00:29:46.420 [2024-11-26 18:31:20.812074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.420 [2024-11-26 18:31:20.836420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.420 [2024-11-26 18:31:20.836458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:46.420 [2024-11-26 18:31:20.836493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.245 ms 00:29:46.420 [2024-11-26 18:31:20.836503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.420 
[2024-11-26 18:31:20.836547] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:46.420 [2024-11-26 18:31:20.836607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 
18:31:20.836930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.836996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:29:46.420 [2024-11-26 18:31:20.837285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:46.420 [2024-11-26 18:31:20.837573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:46.421 [2024-11-26 18:31:20.837949] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:46.421 [2024-11-26 18:31:20.837962] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 77389958-60a0-4bcf-8661-63f2fd98ce2c 00:29:46.421 [2024-11-26 18:31:20.837978] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:46.421 [2024-11-26 18:31:20.837993] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:46.421 [2024-11-26 18:31:20.838005] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:46.421 [2024-11-26 18:31:20.838018] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:46.421 [2024-11-26 18:31:20.838028] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:46.421 [2024-11-26 18:31:20.838041] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:46.421 [2024-11-26 18:31:20.838051] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:46.421 [2024-11-26 18:31:20.838062] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:46.421 [2024-11-26 18:31:20.838072] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:46.421 [2024-11-26 18:31:20.838084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.421 [2024-11-26 18:31:20.838095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:46.421 [2024-11-26 18:31:20.838109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.540 ms 00:29:46.421 [2024-11-26 18:31:20.838120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.421 [2024-11-26 18:31:20.852842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.421 [2024-11-26 18:31:20.852876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:46.421 [2024-11-26 18:31:20.852899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.674 ms 00:29:46.421 [2024-11-26 18:31:20.852940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.421 [2024-11-26 18:31:20.853447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:46.421 [2024-11-26 18:31:20.853469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:46.421 [2024-11-26 18:31:20.853486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:29:46.421 [2024-11-26 18:31:20.853496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.680 [2024-11-26 18:31:20.900585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.680 [2024-11-26 18:31:20.900800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:46.680 [2024-11-26 18:31:20.900830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.680 [2024-11-26 18:31:20.900843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.680 [2024-11-26 18:31:20.900913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.680 [2024-11-26 18:31:20.900927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:46.680 [2024-11-26 18:31:20.900941] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.680 [2024-11-26 18:31:20.900951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.680 [2024-11-26 18:31:20.901085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.680 [2024-11-26 18:31:20.901122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:46.680 [2024-11-26 18:31:20.901136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.680 [2024-11-26 18:31:20.901146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.680 [2024-11-26 18:31:20.901176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.680 [2024-11-26 18:31:20.901189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:46.680 [2024-11-26 18:31:20.901202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.680 [2024-11-26 18:31:20.901212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.680 [2024-11-26 18:31:20.987644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.680 [2024-11-26 18:31:20.987704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:46.680 [2024-11-26 18:31:20.987740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.680 [2024-11-26 18:31:20.987751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.680 [2024-11-26 18:31:21.057463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.680 [2024-11-26 18:31:21.057533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:46.680 [2024-11-26 18:31:21.057552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.680 [2024-11-26 18:31:21.057563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.680 [2024-11-26 18:31:21.057736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.680 [2024-11-26 18:31:21.057761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:46.680 [2024-11-26 18:31:21.057789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.680 [2024-11-26 18:31:21.057801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.680 [2024-11-26 18:31:21.057909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.680 [2024-11-26 18:31:21.057925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:46.680 [2024-11-26 18:31:21.057939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.680 [2024-11-26 18:31:21.057949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.680 [2024-11-26 18:31:21.058136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.680 [2024-11-26 18:31:21.058162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:46.680 [2024-11-26 18:31:21.058185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.680 [2024-11-26 18:31:21.058209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.680 [2024-11-26 18:31:21.058264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.680 [2024-11-26 18:31:21.058286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:29:46.680 [2024-11-26 18:31:21.058301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.681 [2024-11-26 18:31:21.058311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.681 [2024-11-26 18:31:21.058364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.681 [2024-11-26 18:31:21.058378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:46.681 [2024-11-26 18:31:21.058391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.681 [2024-11-26 18:31:21.058405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.681 [2024-11-26 18:31:21.058484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:46.681 [2024-11-26 18:31:21.058503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:46.681 [2024-11-26 18:31:21.058526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:46.681 [2024-11-26 18:31:21.058537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:46.681 [2024-11-26 18:31:21.058747] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 397.059 ms, result 0 00:29:46.681 true 00:29:46.681 18:31:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81488 00:29:46.681 18:31:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81488 00:29:46.681 18:31:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:29:46.939 [2024-11-26 18:31:21.171700] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:29:46.939 [2024-11-26 18:31:21.171839] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82417 ] 00:29:46.939 [2024-11-26 18:31:21.335659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.197 [2024-11-26 18:31:21.435138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.575  [2024-11-26T18:31:23.972Z] Copying: 204/1024 [MB] (204 MBps) [2024-11-26T18:31:24.908Z] Copying: 406/1024 [MB] (202 MBps) [2024-11-26T18:31:25.844Z] Copying: 608/1024 [MB] (201 MBps) [2024-11-26T18:31:26.780Z] Copying: 794/1024 [MB] (186 MBps) [2024-11-26T18:31:27.039Z] Copying: 990/1024 [MB] (196 MBps) [2024-11-26T18:31:27.973Z] Copying: 1024/1024 [MB] (average 197 MBps) 00:29:53.512 00:29:53.512 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81488 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:29:53.512 18:31:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:53.512 [2024-11-26 18:31:27.888273] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
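The records above capture the heart of the dirty-shutdown scenario: the target that had ftl0 open is killed with SIGKILL so the device never sees a clean FTL shutdown, a gigabyte of random data is staged with the first spdk_dd, and the second spdk_dd replays it into the dirty device, forcing the recovery traced below. A minimal shell sketch of that sequence, condensed from the commands logged above; the PID (81488) and all paths are the ones this run recorded, so treat them as illustrative:

  # SIGKILL the target: ftl0 is left dirty, no clean shutdown runs.
  kill -9 81488
  rm -f /dev/shm/spdk_tgt_trace.pid81488

  SPDK_DIR=/home/vagrant/spdk_repo/spdk

  # Stage 1 GiB of random data (262144 blocks x 4096 bytes).
  "$SPDK_DIR/build/bin/spdk_dd" --if=/dev/urandom \
      --of="$SPDK_DIR/test/ftl/testfile2" --bs=4096 --count=262144

  # Replay it into ftl0 at block offset 262144; opening the dirty device
  # triggers the startup-with-recovery sequence logged next.
  "$SPDK_DIR/build/bin/spdk_dd" --if="$SPDK_DIR/test/ftl/testfile2" \
      --ob=ftl0 --count=262144 --seek=262144 \
      --json="$SPDK_DIR/test/ftl/config/ftl.json"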
00:29:53.512 [2024-11-26 18:31:27.888494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82488 ] 00:29:53.771 [2024-11-26 18:31:28.072816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.771 [2024-11-26 18:31:28.171706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.030 [2024-11-26 18:31:28.486389] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:54.030 [2024-11-26 18:31:28.486461] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:54.289 [2024-11-26 18:31:28.552047] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:29:54.289 [2024-11-26 18:31:28.552542] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:29:54.289 [2024-11-26 18:31:28.552801] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:29:54.548 [2024-11-26 18:31:28.839892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.548 [2024-11-26 18:31:28.839941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:54.548 [2024-11-26 18:31:28.839961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:54.548 [2024-11-26 18:31:28.839992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.548 [2024-11-26 18:31:28.840066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.548 [2024-11-26 18:31:28.840084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:54.548 [2024-11-26 18:31:28.840095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:29:54.548 [2024-11-26 18:31:28.840106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.548 [2024-11-26 18:31:28.840149] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:54.548 [2024-11-26 18:31:28.841043] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:54.548 [2024-11-26 18:31:28.841073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.548 [2024-11-26 18:31:28.841085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:54.548 [2024-11-26 18:31:28.841097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.946 ms 00:29:54.548 [2024-11-26 18:31:28.841107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.548 [2024-11-26 18:31:28.843054] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:54.548 [2024-11-26 18:31:28.857065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.548 [2024-11-26 18:31:28.857105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:54.548 [2024-11-26 18:31:28.857137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.013 ms 00:29:54.548 [2024-11-26 18:31:28.857148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.548 [2024-11-26 18:31:28.857217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.548 [2024-11-26 18:31:28.857235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:29:54.548 [2024-11-26 18:31:28.857246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:29:54.548 [2024-11-26 18:31:28.857256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.548 [2024-11-26 18:31:28.865953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.548 [2024-11-26 18:31:28.865991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:54.548 [2024-11-26 18:31:28.866021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.627 ms 00:29:54.548 [2024-11-26 18:31:28.866032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.548 [2024-11-26 18:31:28.866124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.548 [2024-11-26 18:31:28.866141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:54.548 [2024-11-26 18:31:28.866153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:29:54.549 [2024-11-26 18:31:28.866162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.549 [2024-11-26 18:31:28.866234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.549 [2024-11-26 18:31:28.866251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:54.549 [2024-11-26 18:31:28.866262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:54.549 [2024-11-26 18:31:28.866271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.549 [2024-11-26 18:31:28.866302] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:54.549 [2024-11-26 18:31:28.870949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.549 [2024-11-26 18:31:28.870983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:54.549 [2024-11-26 18:31:28.871033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.655 ms 00:29:54.549 [2024-11-26 18:31:28.871044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.549 [2024-11-26 18:31:28.871077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.549 [2024-11-26 18:31:28.871090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:54.549 [2024-11-26 18:31:28.871101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:54.549 [2024-11-26 18:31:28.871111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.549 [2024-11-26 18:31:28.871158] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:54.549 [2024-11-26 18:31:28.871193] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:54.549 [2024-11-26 18:31:28.871231] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:54.549 [2024-11-26 18:31:28.871249] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:54.549 [2024-11-26 18:31:28.871343] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:54.549 [2024-11-26 18:31:28.871357] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:54.549 
[2024-11-26 18:31:28.871370] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:54.549 [2024-11-26 18:31:28.871388] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:54.549 [2024-11-26 18:31:28.871401] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:54.549 [2024-11-26 18:31:28.871411] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:54.549 [2024-11-26 18:31:28.871422] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:54.549 [2024-11-26 18:31:28.871432] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:54.549 [2024-11-26 18:31:28.871446] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:54.549 [2024-11-26 18:31:28.871457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.549 [2024-11-26 18:31:28.871472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:54.549 [2024-11-26 18:31:28.871482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:29:54.549 [2024-11-26 18:31:28.871492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.549 [2024-11-26 18:31:28.871607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.549 [2024-11-26 18:31:28.871630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:54.549 [2024-11-26 18:31:28.871641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:29:54.549 [2024-11-26 18:31:28.871651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.549 [2024-11-26 18:31:28.871779] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:54.549 [2024-11-26 18:31:28.871799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:54.549 [2024-11-26 18:31:28.871810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:54.549 [2024-11-26 18:31:28.871829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:54.549 [2024-11-26 18:31:28.871840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:54.549 [2024-11-26 18:31:28.871850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:54.549 [2024-11-26 18:31:28.871859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:54.549 [2024-11-26 18:31:28.871868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:54.549 [2024-11-26 18:31:28.871878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:54.549 [2024-11-26 18:31:28.871900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:54.549 [2024-11-26 18:31:28.871909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:54.549 [2024-11-26 18:31:28.871918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:54.549 [2024-11-26 18:31:28.871927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:54.549 [2024-11-26 18:31:28.871937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:54.549 [2024-11-26 18:31:28.871949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:54.549 [2024-11-26 18:31:28.871974] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:54.549 [2024-11-26 18:31:28.871984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:54.549 [2024-11-26 18:31:28.871993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:54.549 [2024-11-26 18:31:28.872003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:54.549 [2024-11-26 18:31:28.872013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:54.549 [2024-11-26 18:31:28.872038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:54.549 [2024-11-26 18:31:28.872048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:54.549 [2024-11-26 18:31:28.872058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:54.549 [2024-11-26 18:31:28.872068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:54.549 [2024-11-26 18:31:28.872077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:54.549 [2024-11-26 18:31:28.872087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:54.549 [2024-11-26 18:31:28.872096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:54.549 [2024-11-26 18:31:28.872105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:54.549 [2024-11-26 18:31:28.872114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:54.549 [2024-11-26 18:31:28.872124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:54.549 [2024-11-26 18:31:28.872133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:54.549 [2024-11-26 18:31:28.872142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:54.549 [2024-11-26 18:31:28.872151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:54.549 [2024-11-26 18:31:28.872160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:54.549 [2024-11-26 18:31:28.872170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:54.549 [2024-11-26 18:31:28.872179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:54.549 [2024-11-26 18:31:28.872188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:54.549 [2024-11-26 18:31:28.872198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:54.549 [2024-11-26 18:31:28.872206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:54.549 [2024-11-26 18:31:28.872215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:54.549 [2024-11-26 18:31:28.872224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:54.549 [2024-11-26 18:31:28.872233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:54.549 [2024-11-26 18:31:28.872242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:54.549 [2024-11-26 18:31:28.872250] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:54.549 [2024-11-26 18:31:28.872261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:54.549 [2024-11-26 18:31:28.872284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:54.549 [2024-11-26 18:31:28.872297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:54.549 [2024-11-26 
18:31:28.872308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:54.549 [2024-11-26 18:31:28.872317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:54.549 [2024-11-26 18:31:28.872326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:54.549 [2024-11-26 18:31:28.872336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:54.549 [2024-11-26 18:31:28.872345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:54.549 [2024-11-26 18:31:28.872355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:54.549 [2024-11-26 18:31:28.872366] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:54.549 [2024-11-26 18:31:28.872380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:54.549 [2024-11-26 18:31:28.872391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:54.549 [2024-11-26 18:31:28.872401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:54.549 [2024-11-26 18:31:28.872420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:54.549 [2024-11-26 18:31:28.872430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:54.549 [2024-11-26 18:31:28.872447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:54.549 [2024-11-26 18:31:28.872456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:54.549 [2024-11-26 18:31:28.872466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:54.549 [2024-11-26 18:31:28.872475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:54.549 [2024-11-26 18:31:28.872484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:54.549 [2024-11-26 18:31:28.872494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:54.550 [2024-11-26 18:31:28.872503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:54.550 [2024-11-26 18:31:28.872513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:54.550 [2024-11-26 18:31:28.872522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:54.550 [2024-11-26 18:31:28.872532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:54.550 [2024-11-26 18:31:28.872541] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:29:54.550 [2024-11-26 18:31:28.872564] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:54.550 [2024-11-26 18:31:28.872579] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:54.550 [2024-11-26 18:31:28.872589] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:54.550 [2024-11-26 18:31:28.872599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:54.550 [2024-11-26 18:31:28.872609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:54.550 [2024-11-26 18:31:28.872620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.550 [2024-11-26 18:31:28.872631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:54.550 [2024-11-26 18:31:28.872642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.902 ms 00:29:54.550 [2024-11-26 18:31:28.872653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.550 [2024-11-26 18:31:28.909709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.550 [2024-11-26 18:31:28.909780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:54.550 [2024-11-26 18:31:28.909802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.990 ms 00:29:54.550 [2024-11-26 18:31:28.909813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.550 [2024-11-26 18:31:28.909945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.550 [2024-11-26 18:31:28.909960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:54.550 [2024-11-26 18:31:28.909973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:29:54.550 [2024-11-26 18:31:28.909983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.550 [2024-11-26 18:31:28.962415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.550 [2024-11-26 18:31:28.962484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:54.550 [2024-11-26 18:31:28.962513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.331 ms 00:29:54.550 [2024-11-26 18:31:28.962526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.550 [2024-11-26 18:31:28.962623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.550 [2024-11-26 18:31:28.962643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:54.550 [2024-11-26 18:31:28.962658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:54.550 [2024-11-26 18:31:28.962681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.550 [2024-11-26 18:31:28.963473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.550 [2024-11-26 18:31:28.963647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:54.550 [2024-11-26 18:31:28.963675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:29:54.550 [2024-11-26 18:31:28.963698] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.550 [2024-11-26 18:31:28.963897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.550 [2024-11-26 18:31:28.963916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:54.550 [2024-11-26 18:31:28.963929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:29:54.550 [2024-11-26 18:31:28.963941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.550 [2024-11-26 18:31:28.983246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.550 [2024-11-26 18:31:28.983520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:54.550 [2024-11-26 18:31:28.983666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.275 ms 00:29:54.550 [2024-11-26 18:31:28.983715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.550 [2024-11-26 18:31:28.998805] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:54.550 [2024-11-26 18:31:28.999016] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:54.550 [2024-11-26 18:31:28.999190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.550 [2024-11-26 18:31:28.999285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:54.550 [2024-11-26 18:31:28.999332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.219 ms 00:29:54.550 [2024-11-26 18:31:28.999441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.808 [2024-11-26 18:31:29.026466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.808 [2024-11-26 18:31:29.026747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:54.808 [2024-11-26 18:31:29.026867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.920 ms 00:29:54.808 [2024-11-26 18:31:29.026920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.808 [2024-11-26 18:31:29.041470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.808 [2024-11-26 18:31:29.041704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:54.808 [2024-11-26 18:31:29.041819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.350 ms 00:29:54.808 [2024-11-26 18:31:29.041868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.808 [2024-11-26 18:31:29.056437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.808 [2024-11-26 18:31:29.056684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:54.808 [2024-11-26 18:31:29.056800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.379 ms 00:29:54.808 [2024-11-26 18:31:29.056858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.808 [2024-11-26 18:31:29.057850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.808 [2024-11-26 18:31:29.057919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:54.808 [2024-11-26 18:31:29.058055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.804 ms 00:29:54.808 [2024-11-26 18:31:29.058102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
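Each management step in these startup and shutdown sequences is reported by the same trace_step record group: an Action or Rollback marker, a name, a duration, and a status. When skimming a long run like this one, the name/duration pairs can be pulled out of a saved console log; a small sketch, assuming the console's original one-record-per-line layout and a hypothetical capture file ftl.log:

  # Print "<step name><tab><duration>" for every FTL management step.
  # Relies on the exact trace_step wording shown in this log.
  sed -nE \
      -e 's/.*428:trace_step.*name: (.*)/\1/p' \
      -e 's/.*430:trace_step.*duration: ([0-9.]+ ms).*/\1/p' \
      ftl.log | paste - -

paste - - works here because the 428 (name) and 430 (duration) records always alternate in this output.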
00:29:54.808 [2024-11-26 18:31:29.126579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.808 [2024-11-26 18:31:29.127006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:54.808 [2024-11-26 18:31:29.127122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.355 ms 00:29:54.808 [2024-11-26 18:31:29.127236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.808 [2024-11-26 18:31:29.137919] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:54.808 [2024-11-26 18:31:29.142208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.808 [2024-11-26 18:31:29.142385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:54.808 [2024-11-26 18:31:29.142505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.866 ms 00:29:54.808 [2024-11-26 18:31:29.142577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.808 [2024-11-26 18:31:29.142940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.808 [2024-11-26 18:31:29.143059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:54.808 [2024-11-26 18:31:29.143183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:54.808 [2024-11-26 18:31:29.143230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.808 [2024-11-26 18:31:29.143449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.808 [2024-11-26 18:31:29.143579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:54.808 [2024-11-26 18:31:29.143680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:54.808 [2024-11-26 18:31:29.143726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.808 [2024-11-26 18:31:29.143851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.808 [2024-11-26 18:31:29.143875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:54.808 [2024-11-26 18:31:29.143889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:29:54.808 [2024-11-26 18:31:29.143899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.808 [2024-11-26 18:31:29.143948] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:54.808 [2024-11-26 18:31:29.143966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.808 [2024-11-26 18:31:29.143977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:54.808 [2024-11-26 18:31:29.143988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:29:54.808 [2024-11-26 18:31:29.144005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.808 [2024-11-26 18:31:29.171992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.808 [2024-11-26 18:31:29.172060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:54.808 [2024-11-26 18:31:29.172079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.957 ms 00:29:54.808 [2024-11-26 18:31:29.172090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.808 [2024-11-26 18:31:29.172188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:54.808 [2024-11-26 
18:31:29.172206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:54.808 [2024-11-26 18:31:29.172218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:29:54.808 [2024-11-26 18:31:29.172227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.808 [2024-11-26 18:31:29.173848] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 333.311 ms, result 0 00:29:55.744  [2024-11-26T18:31:31.577Z] Copying: 23/1024 [MB] (23 MBps) [2024-11-26T18:31:32.512Z] Copying: 46/1024 [MB] (23 MBps) [2024-11-26T18:31:33.493Z] Copying: 69/1024 [MB] (23 MBps) [2024-11-26T18:31:34.430Z] Copying: 93/1024 [MB] (23 MBps) [2024-11-26T18:31:35.366Z] Copying: 116/1024 [MB] (23 MBps) [2024-11-26T18:31:36.301Z] Copying: 140/1024 [MB] (23 MBps) [2024-11-26T18:31:37.238Z] Copying: 163/1024 [MB] (23 MBps) [2024-11-26T18:31:38.614Z] Copying: 187/1024 [MB] (23 MBps) [2024-11-26T18:31:39.549Z] Copying: 211/1024 [MB] (24 MBps) [2024-11-26T18:31:40.483Z] Copying: 234/1024 [MB] (23 MBps) [2024-11-26T18:31:41.419Z] Copying: 258/1024 [MB] (23 MBps) [2024-11-26T18:31:42.355Z] Copying: 282/1024 [MB] (24 MBps) [2024-11-26T18:31:43.290Z] Copying: 306/1024 [MB] (24 MBps) [2024-11-26T18:31:44.224Z] Copying: 329/1024 [MB] (23 MBps) [2024-11-26T18:31:45.598Z] Copying: 353/1024 [MB] (23 MBps) [2024-11-26T18:31:46.532Z] Copying: 376/1024 [MB] (23 MBps) [2024-11-26T18:31:47.467Z] Copying: 401/1024 [MB] (24 MBps) [2024-11-26T18:31:48.428Z] Copying: 425/1024 [MB] (24 MBps) [2024-11-26T18:31:49.366Z] Copying: 448/1024 [MB] (23 MBps) [2024-11-26T18:31:50.307Z] Copying: 472/1024 [MB] (23 MBps) [2024-11-26T18:31:51.244Z] Copying: 496/1024 [MB] (23 MBps) [2024-11-26T18:31:52.621Z] Copying: 519/1024 [MB] (23 MBps) [2024-11-26T18:31:53.190Z] Copying: 542/1024 [MB] (23 MBps) [2024-11-26T18:31:54.569Z] Copying: 565/1024 [MB] (23 MBps) [2024-11-26T18:31:55.505Z] Copying: 589/1024 [MB] (23 MBps) [2024-11-26T18:31:56.442Z] Copying: 612/1024 [MB] (23 MBps) [2024-11-26T18:31:57.379Z] Copying: 635/1024 [MB] (23 MBps) [2024-11-26T18:31:58.316Z] Copying: 659/1024 [MB] (23 MBps) [2024-11-26T18:31:59.254Z] Copying: 681/1024 [MB] (22 MBps) [2024-11-26T18:32:00.190Z] Copying: 704/1024 [MB] (22 MBps) [2024-11-26T18:32:01.568Z] Copying: 727/1024 [MB] (23 MBps) [2024-11-26T18:32:02.505Z] Copying: 751/1024 [MB] (23 MBps) [2024-11-26T18:32:03.441Z] Copying: 774/1024 [MB] (23 MBps) [2024-11-26T18:32:04.437Z] Copying: 797/1024 [MB] (23 MBps) [2024-11-26T18:32:05.373Z] Copying: 820/1024 [MB] (23 MBps) [2024-11-26T18:32:06.308Z] Copying: 843/1024 [MB] (22 MBps) [2024-11-26T18:32:07.244Z] Copying: 866/1024 [MB] (23 MBps) [2024-11-26T18:32:08.620Z] Copying: 890/1024 [MB] (23 MBps) [2024-11-26T18:32:09.555Z] Copying: 913/1024 [MB] (23 MBps) [2024-11-26T18:32:10.492Z] Copying: 937/1024 [MB] (23 MBps) [2024-11-26T18:32:11.431Z] Copying: 960/1024 [MB] (23 MBps) [2024-11-26T18:32:12.366Z] Copying: 983/1024 [MB] (23 MBps) [2024-11-26T18:32:13.304Z] Copying: 1007/1024 [MB] (23 MBps) [2024-11-26T18:32:14.243Z] Copying: 1023/1024 [MB] (16 MBps) [2024-11-26T18:32:14.243Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-26 18:32:13.885608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.782 [2024-11-26 18:32:13.885733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:39.782 [2024-11-26 18:32:13.885772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 
ms 00:30:39.782 [2024-11-26 18:32:13.885785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.782 [2024-11-26 18:32:13.888259] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:39.782 [2024-11-26 18:32:13.893073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.782 [2024-11-26 18:32:13.893228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:39.782 [2024-11-26 18:32:13.893374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.566 ms 00:30:39.782 [2024-11-26 18:32:13.893444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.782 [2024-11-26 18:32:13.905931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.782 [2024-11-26 18:32:13.906107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:39.782 [2024-11-26 18:32:13.906219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.272 ms 00:30:39.782 [2024-11-26 18:32:13.906265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.782 [2024-11-26 18:32:13.927575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.782 [2024-11-26 18:32:13.927775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:39.782 [2024-11-26 18:32:13.927892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.245 ms 00:30:39.782 [2024-11-26 18:32:13.927913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.782 [2024-11-26 18:32:13.933583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.782 [2024-11-26 18:32:13.933613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:39.782 [2024-11-26 18:32:13.933627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.578 ms 00:30:39.782 [2024-11-26 18:32:13.933638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.782 [2024-11-26 18:32:13.962606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.782 [2024-11-26 18:32:13.962703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:39.782 [2024-11-26 18:32:13.962739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.924 ms 00:30:39.782 [2024-11-26 18:32:13.962751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.782 [2024-11-26 18:32:13.980409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.782 [2024-11-26 18:32:13.980449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:39.782 [2024-11-26 18:32:13.980481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.613 ms 00:30:39.782 [2024-11-26 18:32:13.980492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.782 [2024-11-26 18:32:14.087355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.782 [2024-11-26 18:32:14.087439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:39.782 [2024-11-26 18:32:14.087467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.822 ms 00:30:39.782 [2024-11-26 18:32:14.087479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.782 [2024-11-26 18:32:14.114530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.782 [2024-11-26 
18:32:14.114612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:39.782 [2024-11-26 18:32:14.114644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.016 ms 00:30:39.782 [2024-11-26 18:32:14.114682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.782 [2024-11-26 18:32:14.141250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.782 [2024-11-26 18:32:14.141287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:39.782 [2024-11-26 18:32:14.141318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.502 ms 00:30:39.782 [2024-11-26 18:32:14.141328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.782 [2024-11-26 18:32:14.167086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.782 [2024-11-26 18:32:14.167302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:39.782 [2024-11-26 18:32:14.167333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.722 ms 00:30:39.782 [2024-11-26 18:32:14.167347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.782 [2024-11-26 18:32:14.193323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.782 [2024-11-26 18:32:14.193360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:39.782 [2024-11-26 18:32:14.193390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.891 ms 00:30:39.782 [2024-11-26 18:32:14.193400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.782 [2024-11-26 18:32:14.193438] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:39.782 [2024-11-26 18:32:14.193459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 114432 / 261120 wr_cnt: 1 state: open 00:30:39.782 [2024-11-26 18:32:14.193472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193621] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:39.782 [2024-11-26 18:32:14.193844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.193854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.193864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.193874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 
[2024-11-26 18:32:14.193885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.193896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.193906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.193916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.193927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.193937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.193963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.193974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.193983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.193993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 
state: free 00:30:39.783 [2024-11-26 18:32:14.194169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 
0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:39.783 [2024-11-26 18:32:14.194577] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:39.783 [2024-11-26 18:32:14.194597] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 77389958-60a0-4bcf-8661-63f2fd98ce2c 00:30:39.783 [2024-11-26 18:32:14.194624] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 114432 00:30:39.783 [2024-11-26 18:32:14.194634] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 115392 00:30:39.783 [2024-11-26 18:32:14.194644] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 114432 00:30:39.783 [2024-11-26 18:32:14.194664] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0084 00:30:39.783 [2024-11-26 18:32:14.194674] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:39.783 [2024-11-26 18:32:14.194684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:39.783 [2024-11-26 18:32:14.194702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:39.783 [2024-11-26 18:32:14.194728] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:39.783 [2024-11-26 18:32:14.194739] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:39.783 [2024-11-26 18:32:14.194750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.783 [2024-11-26 18:32:14.194762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:39.783 [2024-11-26 18:32:14.194774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.314 ms 00:30:39.783 [2024-11-26 18:32:14.194785] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.783 [2024-11-26 18:32:14.210331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.783 [2024-11-26 18:32:14.210364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:39.783 [2024-11-26 18:32:14.210395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.499 ms 00:30:39.783 [2024-11-26 18:32:14.210405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.783 [2024-11-26 18:32:14.210939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.783 [2024-11-26 18:32:14.210964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:39.783 [2024-11-26 18:32:14.210986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:30:39.783 [2024-11-26 18:32:14.210998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.043 [2024-11-26 18:32:14.249498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:40.043 [2024-11-26 18:32:14.249543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:40.043 [2024-11-26 18:32:14.249605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:40.043 [2024-11-26 18:32:14.249617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.043 [2024-11-26 18:32:14.249679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:40.043 [2024-11-26 18:32:14.249709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:40.043 [2024-11-26 18:32:14.249726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:40.043 [2024-11-26 18:32:14.249736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.043 [2024-11-26 18:32:14.249815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:40.043 [2024-11-26 18:32:14.249834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:40.043 [2024-11-26 18:32:14.249845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:40.043 [2024-11-26 18:32:14.249855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.043 [2024-11-26 18:32:14.249876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:40.043 [2024-11-26 18:32:14.249889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:40.043 [2024-11-26 18:32:14.249899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:40.043 [2024-11-26 18:32:14.249909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.043 [2024-11-26 18:32:14.341548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:40.043 [2024-11-26 18:32:14.341638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:40.043 [2024-11-26 18:32:14.341673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:40.043 [2024-11-26 18:32:14.341684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.043 [2024-11-26 18:32:14.413409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:40.043 [2024-11-26 18:32:14.413693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:40.043 [2024-11-26 18:32:14.413721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:30:40.043 [2024-11-26 18:32:14.413744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.043 [2024-11-26 18:32:14.413846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:40.043 [2024-11-26 18:32:14.413863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:40.043 [2024-11-26 18:32:14.413875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:40.043 [2024-11-26 18:32:14.413886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.043 [2024-11-26 18:32:14.413936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:40.043 [2024-11-26 18:32:14.413951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:40.043 [2024-11-26 18:32:14.413963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:40.043 [2024-11-26 18:32:14.413973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.043 [2024-11-26 18:32:14.414126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:40.043 [2024-11-26 18:32:14.414144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:40.043 [2024-11-26 18:32:14.414156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:40.043 [2024-11-26 18:32:14.414167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.043 [2024-11-26 18:32:14.414211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:40.043 [2024-11-26 18:32:14.414227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:40.043 [2024-11-26 18:32:14.414238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:40.043 [2024-11-26 18:32:14.414248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.043 [2024-11-26 18:32:14.414297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:40.043 [2024-11-26 18:32:14.414312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:40.043 [2024-11-26 18:32:14.414323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:40.043 [2024-11-26 18:32:14.414348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.043 [2024-11-26 18:32:14.414412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:40.043 [2024-11-26 18:32:14.414427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:40.043 [2024-11-26 18:32:14.414437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:40.043 [2024-11-26 18:32:14.414446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:40.043 [2024-11-26 18:32:14.414579] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 532.485 ms, result 0 00:30:41.419 00:30:41.419 00:30:41.419 18:32:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:43.322 18:32:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:43.322 [2024-11-26 18:32:17.595065] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 
initialization... 00:30:43.322 [2024-11-26 18:32:17.595530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82961 ] 00:30:43.322 [2024-11-26 18:32:17.770908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.580 [2024-11-26 18:32:17.911734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.840 [2024-11-26 18:32:18.240657] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:43.840 [2024-11-26 18:32:18.240750] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:44.109 [2024-11-26 18:32:18.402352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.109 [2024-11-26 18:32:18.402428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:44.109 [2024-11-26 18:32:18.402464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:44.109 [2024-11-26 18:32:18.402476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.109 [2024-11-26 18:32:18.402535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.109 [2024-11-26 18:32:18.402555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:44.109 [2024-11-26 18:32:18.402587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:44.109 [2024-11-26 18:32:18.402618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.109 [2024-11-26 18:32:18.402649] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:44.109 [2024-11-26 18:32:18.403498] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:44.109 [2024-11-26 18:32:18.403521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.109 [2024-11-26 18:32:18.403533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:44.109 [2024-11-26 18:32:18.403545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.879 ms 00:30:44.109 [2024-11-26 18:32:18.403555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.109 [2024-11-26 18:32:18.405571] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:44.109 [2024-11-26 18:32:18.419671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.109 [2024-11-26 18:32:18.419712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:44.109 [2024-11-26 18:32:18.419744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.102 ms 00:30:44.109 [2024-11-26 18:32:18.419755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.109 [2024-11-26 18:32:18.419823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.109 [2024-11-26 18:32:18.419842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:44.109 [2024-11-26 18:32:18.419853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:30:44.109 [2024-11-26 18:32:18.419864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.109 [2024-11-26 18:32:18.428699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:30:44.109 [2024-11-26 18:32:18.428738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:44.109 [2024-11-26 18:32:18.428768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.754 ms 00:30:44.109 [2024-11-26 18:32:18.428784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.109 [2024-11-26 18:32:18.428874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.109 [2024-11-26 18:32:18.428892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:44.109 [2024-11-26 18:32:18.428904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:30:44.109 [2024-11-26 18:32:18.428914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.109 [2024-11-26 18:32:18.428967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.109 [2024-11-26 18:32:18.428983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:44.109 [2024-11-26 18:32:18.428994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:44.109 [2024-11-26 18:32:18.429005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.109 [2024-11-26 18:32:18.429042] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:44.109 [2024-11-26 18:32:18.433740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.109 [2024-11-26 18:32:18.433997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:44.109 [2024-11-26 18:32:18.434031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.707 ms 00:30:44.109 [2024-11-26 18:32:18.434042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.109 [2024-11-26 18:32:18.434083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.109 [2024-11-26 18:32:18.434099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:44.109 [2024-11-26 18:32:18.434112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:44.109 [2024-11-26 18:32:18.434122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.109 [2024-11-26 18:32:18.434192] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:44.109 [2024-11-26 18:32:18.434224] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:44.109 [2024-11-26 18:32:18.434265] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:44.109 [2024-11-26 18:32:18.434304] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:44.109 [2024-11-26 18:32:18.434402] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:44.109 [2024-11-26 18:32:18.434417] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:44.109 [2024-11-26 18:32:18.434432] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:44.109 [2024-11-26 18:32:18.434445] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:44.109 [2024-11-26 18:32:18.434458] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:44.110 [2024-11-26 18:32:18.434469] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:44.110 [2024-11-26 18:32:18.434480] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:44.110 [2024-11-26 18:32:18.434526] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:44.110 [2024-11-26 18:32:18.434537] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:44.110 [2024-11-26 18:32:18.434548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.110 [2024-11-26 18:32:18.434558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:44.110 [2024-11-26 18:32:18.434569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:30:44.110 [2024-11-26 18:32:18.434578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.110 [2024-11-26 18:32:18.434745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.110 [2024-11-26 18:32:18.434764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:44.110 [2024-11-26 18:32:18.434777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:30:44.110 [2024-11-26 18:32:18.434788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.110 [2024-11-26 18:32:18.434904] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:44.110 [2024-11-26 18:32:18.434924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:44.110 [2024-11-26 18:32:18.434936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:44.110 [2024-11-26 18:32:18.434947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:44.110 [2024-11-26 18:32:18.434957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:44.110 [2024-11-26 18:32:18.434967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:44.110 [2024-11-26 18:32:18.434977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:44.110 [2024-11-26 18:32:18.435002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:44.110 [2024-11-26 18:32:18.435011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:44.110 [2024-11-26 18:32:18.435021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:44.110 [2024-11-26 18:32:18.435045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:44.110 [2024-11-26 18:32:18.435055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:44.110 [2024-11-26 18:32:18.435080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:44.110 [2024-11-26 18:32:18.435103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:44.110 [2024-11-26 18:32:18.435114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:44.110 [2024-11-26 18:32:18.435125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:44.110 [2024-11-26 18:32:18.435135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:44.110 [2024-11-26 18:32:18.435146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:44.110 [2024-11-26 18:32:18.435156] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:44.110 [2024-11-26 18:32:18.435166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:44.110 [2024-11-26 18:32:18.435175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:44.110 [2024-11-26 18:32:18.435185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:44.110 [2024-11-26 18:32:18.435195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:44.110 [2024-11-26 18:32:18.435205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:44.110 [2024-11-26 18:32:18.435215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:44.110 [2024-11-26 18:32:18.435224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:44.110 [2024-11-26 18:32:18.435234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:44.110 [2024-11-26 18:32:18.435244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:44.110 [2024-11-26 18:32:18.435253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:44.110 [2024-11-26 18:32:18.435264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:44.110 [2024-11-26 18:32:18.435273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:44.110 [2024-11-26 18:32:18.435283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:44.110 [2024-11-26 18:32:18.435293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:44.110 [2024-11-26 18:32:18.435302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:44.110 [2024-11-26 18:32:18.435312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:44.110 [2024-11-26 18:32:18.435322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:44.110 [2024-11-26 18:32:18.435332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:44.110 [2024-11-26 18:32:18.435341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:44.110 [2024-11-26 18:32:18.435351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:44.110 [2024-11-26 18:32:18.435360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:44.110 [2024-11-26 18:32:18.435370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:44.110 [2024-11-26 18:32:18.435379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:44.110 [2024-11-26 18:32:18.435388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:44.110 [2024-11-26 18:32:18.435398] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:44.110 [2024-11-26 18:32:18.435409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:44.110 [2024-11-26 18:32:18.435420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:44.110 [2024-11-26 18:32:18.435431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:44.110 [2024-11-26 18:32:18.435441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:44.110 [2024-11-26 18:32:18.435453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:44.110 [2024-11-26 18:32:18.435464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:44.110 
[2024-11-26 18:32:18.435474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:44.110 [2024-11-26 18:32:18.435484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:44.110 [2024-11-26 18:32:18.435494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:44.110 [2024-11-26 18:32:18.435506] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:44.110 [2024-11-26 18:32:18.435519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:44.110 [2024-11-26 18:32:18.435537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:44.110 [2024-11-26 18:32:18.435547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:44.110 [2024-11-26 18:32:18.435558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:44.110 [2024-11-26 18:32:18.435569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:44.110 [2024-11-26 18:32:18.435579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:44.110 [2024-11-26 18:32:18.435590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:44.110 [2024-11-26 18:32:18.435601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:44.110 [2024-11-26 18:32:18.435611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:44.110 [2024-11-26 18:32:18.435638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:44.110 [2024-11-26 18:32:18.435649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:44.110 [2024-11-26 18:32:18.435661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:44.110 [2024-11-26 18:32:18.435671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:44.110 [2024-11-26 18:32:18.435681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:44.110 [2024-11-26 18:32:18.435692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:44.110 [2024-11-26 18:32:18.435703] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:44.110 [2024-11-26 18:32:18.435715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:44.110 [2024-11-26 18:32:18.435727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:44.110 [2024-11-26 18:32:18.435738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:44.110 [2024-11-26 18:32:18.435749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:44.110 [2024-11-26 18:32:18.435760] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:44.110 [2024-11-26 18:32:18.435771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.110 [2024-11-26 18:32:18.435782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:44.110 [2024-11-26 18:32:18.435794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.929 ms 00:30:44.110 [2024-11-26 18:32:18.435805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.110 [2024-11-26 18:32:18.471369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.110 [2024-11-26 18:32:18.471434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:44.110 [2024-11-26 18:32:18.471469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.501 ms 00:30:44.110 [2024-11-26 18:32:18.471487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.110 [2024-11-26 18:32:18.471631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.110 [2024-11-26 18:32:18.471649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:44.110 [2024-11-26 18:32:18.471662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:30:44.110 [2024-11-26 18:32:18.471673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.110 [2024-11-26 18:32:18.519663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.111 [2024-11-26 18:32:18.519722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:44.111 [2024-11-26 18:32:18.519741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.872 ms 00:30:44.111 [2024-11-26 18:32:18.519754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.111 [2024-11-26 18:32:18.519817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.111 [2024-11-26 18:32:18.519835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:44.111 [2024-11-26 18:32:18.519855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:44.111 [2024-11-26 18:32:18.519880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.111 [2024-11-26 18:32:18.520542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.111 [2024-11-26 18:32:18.520576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:44.111 [2024-11-26 18:32:18.520593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:30:44.111 [2024-11-26 18:32:18.520605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.111 [2024-11-26 18:32:18.520778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.111 [2024-11-26 18:32:18.520828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:44.111 [2024-11-26 18:32:18.520849] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:30:44.111 [2024-11-26 18:32:18.520861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.111 [2024-11-26 18:32:18.540686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.111 [2024-11-26 18:32:18.540747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:44.111 [2024-11-26 18:32:18.540780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.768 ms 00:30:44.111 [2024-11-26 18:32:18.540793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.111 [2024-11-26 18:32:18.555322] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:30:44.111 [2024-11-26 18:32:18.555378] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:44.111 [2024-11-26 18:32:18.555411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.111 [2024-11-26 18:32:18.555422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:44.111 [2024-11-26 18:32:18.555434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.469 ms 00:30:44.111 [2024-11-26 18:32:18.555444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.369 [2024-11-26 18:32:18.579553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.369 [2024-11-26 18:32:18.579615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:44.369 [2024-11-26 18:32:18.579648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.066 ms 00:30:44.369 [2024-11-26 18:32:18.579659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.369 [2024-11-26 18:32:18.592474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.369 [2024-11-26 18:32:18.592530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:44.369 [2024-11-26 18:32:18.592560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.772 ms 00:30:44.369 [2024-11-26 18:32:18.592578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.369 [2024-11-26 18:32:18.605081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.369 [2024-11-26 18:32:18.605136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:44.369 [2024-11-26 18:32:18.605166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.462 ms 00:30:44.369 [2024-11-26 18:32:18.605176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.369 [2024-11-26 18:32:18.606062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.369 [2024-11-26 18:32:18.606088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:44.369 [2024-11-26 18:32:18.606107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.780 ms 00:30:44.369 [2024-11-26 18:32:18.606118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.369 [2024-11-26 18:32:18.674794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.369 [2024-11-26 18:32:18.674905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:44.369 [2024-11-26 18:32:18.674937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 68.651 ms 00:30:44.369 [2024-11-26 18:32:18.674950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.369 [2024-11-26 18:32:18.685837] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:44.369 [2024-11-26 18:32:18.689348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.369 [2024-11-26 18:32:18.689415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:44.369 [2024-11-26 18:32:18.689445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.325 ms 00:30:44.369 [2024-11-26 18:32:18.689456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.369 [2024-11-26 18:32:18.689578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.369 [2024-11-26 18:32:18.689599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:44.369 [2024-11-26 18:32:18.689632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:44.369 [2024-11-26 18:32:18.689664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.369 [2024-11-26 18:32:18.691681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.369 [2024-11-26 18:32:18.691735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:44.369 [2024-11-26 18:32:18.691764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.943 ms 00:30:44.369 [2024-11-26 18:32:18.691774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.369 [2024-11-26 18:32:18.691808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.369 [2024-11-26 18:32:18.691823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:44.369 [2024-11-26 18:32:18.691835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:44.369 [2024-11-26 18:32:18.691845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.369 [2024-11-26 18:32:18.691891] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:44.369 [2024-11-26 18:32:18.691907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.369 [2024-11-26 18:32:18.691917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:44.369 [2024-11-26 18:32:18.691928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:30:44.369 [2024-11-26 18:32:18.691938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.369 [2024-11-26 18:32:18.717771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.369 [2024-11-26 18:32:18.717830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:44.369 [2024-11-26 18:32:18.717867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.778 ms 00:30:44.369 [2024-11-26 18:32:18.717878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.369 [2024-11-26 18:32:18.717961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.369 [2024-11-26 18:32:18.717979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:44.369 [2024-11-26 18:32:18.717991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:30:44.369 [2024-11-26 18:32:18.718002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
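The trace above is the startup half of the dirty-shutdown test: after loading and validating the superblock (note the earlier "SHM: clean 0, shm_clean 0" line) the FTL restores the NV cache state, valid map, band info, trim metadata, P2L checkpoints, and L2P from disk, then marks the on-disk state dirty ("Set FTL dirty state"), which is what lets a later startup tell whether the previous shutdown completed. The layout figures it prints are internally consistent and easy to verify by hand: the superblock region of blk_sz:0x20 (32 blocks) is rendered as 0.12 MiB, so one FTL block is 4 KiB, and 20,971,520 L2P entries at the reported 4-byte address size come to exactly the 80.00 MiB shown for the l2p region. A minimal standalone check of that arithmetic (illustrative C, not SPDK code; all constants are copied from the log):

#include <stdint.h>
#include <stdio.h>

/* Sanity-check the layout figures printed by ftl_layout.c above.
 * Constants are copied from the log; this is illustrative code,
 * not part of SPDK. */
int main(void)
{
    const uint64_t ftl_block = 4096;        /* blk_sz:0x20 shown as 0.12 MiB => 4 KiB blocks */
    const uint64_t l2p_entries = 20971520;  /* "L2P entries: 20971520" */
    const uint64_t l2p_addr_size = 4;       /* "L2P address size: 4" (bytes per entry) */

    double l2p_mib = (double)(l2p_entries * l2p_addr_size) / (1024 * 1024);
    double mapped_gib = (double)(l2p_entries * ftl_block) / (1024.0 * 1024 * 1024);

    printf("l2p region: %.2f MiB\n", l2p_mib);          /* matches "blocks: 80.00 MiB" */
    printf("addressable blocks: %.2f GiB\n", mapped_gib); /* 4 KiB blocks reachable via the table */
    return 0;
}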
00:30:44.369 [2024-11-26 18:32:18.719670] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 316.644 ms, result 0 00:30:45.746  [2024-11-26T18:32:21.139Z] Copying: 952/1048576 [kB] (952 kBps) [2024-11-26T18:32:22.074Z] Copying: 4700/1048576 [kB] (3748 kBps) [2024-11-26T18:32:23.012Z] Copying: 25/1024 [MB] (20 MBps) [2024-11-26T18:32:23.951Z] Copying: 52/1024 [MB] (27 MBps) [2024-11-26T18:32:25.327Z] Copying: 80/1024 [MB] (28 MBps) [2024-11-26T18:32:26.262Z] Copying: 108/1024 [MB] (27 MBps) [2024-11-26T18:32:27.198Z] Copying: 136/1024 [MB] (27 MBps) [2024-11-26T18:32:28.135Z] Copying: 163/1024 [MB] (27 MBps) [2024-11-26T18:32:29.073Z] Copying: 192/1024 [MB] (28 MBps) [2024-11-26T18:32:30.091Z] Copying: 221/1024 [MB] (28 MBps) [2024-11-26T18:32:31.044Z] Copying: 248/1024 [MB] (27 MBps) [2024-11-26T18:32:31.981Z] Copying: 276/1024 [MB] (28 MBps) [2024-11-26T18:32:32.919Z] Copying: 304/1024 [MB] (28 MBps) [2024-11-26T18:32:34.294Z] Copying: 333/1024 [MB] (28 MBps) [2024-11-26T18:32:35.231Z] Copying: 361/1024 [MB] (27 MBps) [2024-11-26T18:32:36.168Z] Copying: 389/1024 [MB] (27 MBps) [2024-11-26T18:32:37.102Z] Copying: 416/1024 [MB] (27 MBps) [2024-11-26T18:32:38.038Z] Copying: 444/1024 [MB] (28 MBps) [2024-11-26T18:32:38.976Z] Copying: 472/1024 [MB] (27 MBps) [2024-11-26T18:32:39.912Z] Copying: 500/1024 [MB] (28 MBps) [2024-11-26T18:32:41.290Z] Copying: 528/1024 [MB] (27 MBps) [2024-11-26T18:32:42.227Z] Copying: 556/1024 [MB] (27 MBps) [2024-11-26T18:32:43.163Z] Copying: 584/1024 [MB] (28 MBps) [2024-11-26T18:32:44.099Z] Copying: 613/1024 [MB] (28 MBps) [2024-11-26T18:32:45.037Z] Copying: 641/1024 [MB] (28 MBps) [2024-11-26T18:32:45.974Z] Copying: 669/1024 [MB] (27 MBps) [2024-11-26T18:32:46.911Z] Copying: 697/1024 [MB] (28 MBps) [2024-11-26T18:32:48.311Z] Copying: 726/1024 [MB] (28 MBps) [2024-11-26T18:32:49.246Z] Copying: 753/1024 [MB] (27 MBps) [2024-11-26T18:32:50.181Z] Copying: 781/1024 [MB] (27 MBps) [2024-11-26T18:32:51.118Z] Copying: 808/1024 [MB] (27 MBps) [2024-11-26T18:32:52.053Z] Copying: 836/1024 [MB] (27 MBps) [2024-11-26T18:32:52.988Z] Copying: 865/1024 [MB] (28 MBps) [2024-11-26T18:32:53.924Z] Copying: 893/1024 [MB] (28 MBps) [2024-11-26T18:32:55.300Z] Copying: 921/1024 [MB] (27 MBps) [2024-11-26T18:32:56.236Z] Copying: 950/1024 [MB] (29 MBps) [2024-11-26T18:32:57.172Z] Copying: 979/1024 [MB] (29 MBps) [2024-11-26T18:32:57.741Z] Copying: 1008/1024 [MB] (28 MBps) [2024-11-26T18:32:57.741Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-26 18:32:57.590032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.280 [2024-11-26 18:32:57.590108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:23.280 [2024-11-26 18:32:57.590129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:23.280 [2024-11-26 18:32:57.590142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.280 [2024-11-26 18:32:57.590172] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:23.280 [2024-11-26 18:32:57.593800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.280 [2024-11-26 18:32:57.593835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:23.280 [2024-11-26 18:32:57.593850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.606 ms 00:31:23.280 [2024-11-26 18:32:57.593861] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.280 [2024-11-26 18:32:57.594149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.280 [2024-11-26 18:32:57.594179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:23.280 [2024-11-26 18:32:57.594194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:31:23.280 [2024-11-26 18:32:57.594205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.280 [2024-11-26 18:32:57.607132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.280 [2024-11-26 18:32:57.607204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:23.280 [2024-11-26 18:32:57.607238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.902 ms 00:31:23.280 [2024-11-26 18:32:57.607265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.280 [2024-11-26 18:32:57.613080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.280 [2024-11-26 18:32:57.613136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:23.280 [2024-11-26 18:32:57.613174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.762 ms 00:31:23.280 [2024-11-26 18:32:57.613185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.280 [2024-11-26 18:32:57.639584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.280 [2024-11-26 18:32:57.639666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:23.280 [2024-11-26 18:32:57.639698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.336 ms 00:31:23.280 [2024-11-26 18:32:57.639708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.281 [2024-11-26 18:32:57.654980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.281 [2024-11-26 18:32:57.655038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:23.281 [2024-11-26 18:32:57.655084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.231 ms 00:31:23.281 [2024-11-26 18:32:57.655095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.281 [2024-11-26 18:32:57.657115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.281 [2024-11-26 18:32:57.657170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:23.281 [2024-11-26 18:32:57.657201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.992 ms 00:31:23.281 [2024-11-26 18:32:57.657235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.281 [2024-11-26 18:32:57.684242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.281 [2024-11-26 18:32:57.684288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:23.281 [2024-11-26 18:32:57.684320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.985 ms 00:31:23.281 [2024-11-26 18:32:57.684331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.281 [2024-11-26 18:32:57.710049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.281 [2024-11-26 18:32:57.710104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:23.281 [2024-11-26 18:32:57.710134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.675 ms 
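Every management step in these traces follows the same four-line pattern from mngt/ftl_mngt.c: an Action (or Rollback) marker, the step name, a duration, and a status, and the teardown path replays the startup step names in reverse order as Rollback entries (compare the rollback list after the first shutdown with the startup sequence that follows it). A minimal sketch of such a step driver, assuming hypothetical action/rollback callbacks (this is not the actual ftl_mngt API):

#include <stdio.h>
#include <time.h>

/* Hypothetical step descriptor mirroring the Action/Rollback trace
 * pattern seen in the log. Not the real SPDK ftl_mngt interface. */
struct step {
    const char *name;
    int (*action)(void);     /* forward path; nonzero status = failure */
    void (*rollback)(void);  /* undo handler, replayed in reverse */
};

static double ms_between(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

static int run(struct step *steps, int n)
{
    for (int i = 0; i < n; i++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        int status = steps[i].action();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("Action\nname: %s\nduration: %.3f ms\nstatus: %d\n",
               steps[i].name, ms_between(t0, t1), status);
        if (status != 0) {
            while (--i >= 0) {  /* unwind completed steps in reverse */
                printf("Rollback\nname: %s\n", steps[i].name);
                steps[i].rollback();
            }
            return status;
        }
    }
    return 0;
}

static int ok(void) { return 0; }
static void undo(void) { }

int main(void)
{
    struct step seq[] = { { "Open base bdev", ok, undo },
                          { "Open cache bdev", ok, undo } };
    return run(seq, 2);
}

In the log the shutdown-path Rollback entries all report 0.000 ms with status 0, consistent with cheap teardown handlers being replayed rather than an actual failure unwind.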
00:31:23.281 [2024-11-26 18:32:57.710144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.281 [2024-11-26 18:32:57.735481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.281 [2024-11-26 18:32:57.735544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:23.281 [2024-11-26 18:32:57.735583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.298 ms 00:31:23.281 [2024-11-26 18:32:57.735594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.541 [2024-11-26 18:32:57.761359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.541 [2024-11-26 18:32:57.761415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:23.541 [2024-11-26 18:32:57.761445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.698 ms 00:31:23.541 [2024-11-26 18:32:57.761455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.541 [2024-11-26 18:32:57.761495] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:23.541 [2024-11-26 18:32:57.761526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:23.541 [2024-11-26 18:32:57.761542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:31:23.541 [2024-11-26 18:32:57.761571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: 
free 00:31:23.541 [2024-11-26 18:32:57.761803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.761989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 
261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:23.541 [2024-11-26 18:32:57.762436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762806] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:23.542 [2024-11-26 18:32:57.762932] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:23.542 [2024-11-26 18:32:57.762945] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 77389958-60a0-4bcf-8661-63f2fd98ce2c 00:31:23.542 [2024-11-26 18:32:57.762957] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:31:23.542 [2024-11-26 18:32:57.762968] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 150208 00:31:23.542 [2024-11-26 18:32:57.762987] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 148224 00:31:23.542 [2024-11-26 18:32:57.763000] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0134 00:31:23.542 [2024-11-26 18:32:57.763011] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:23.542 [2024-11-26 18:32:57.763034] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:23.542 [2024-11-26 18:32:57.763046] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:23.542 [2024-11-26 18:32:57.763056] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:23.542 [2024-11-26 18:32:57.763067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:23.542 [2024-11-26 18:32:57.763079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.542 [2024-11-26 18:32:57.763091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:23.542 [2024-11-26 18:32:57.763104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.585 ms 00:31:23.542 [2024-11-26 18:32:57.763115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.542 [2024-11-26 18:32:57.779145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.542 [2024-11-26 18:32:57.779196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:23.542 [2024-11-26 18:32:57.779227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.987 ms 00:31:23.542 [2024-11-26 18:32:57.779238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.542 [2024-11-26 18:32:57.779762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:23.542 [2024-11-26 18:32:57.779792] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:23.542 [2024-11-26 18:32:57.779819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.500 ms 00:31:23.542 [2024-11-26 18:32:57.779845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.542 [2024-11-26 18:32:57.817952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.542 [2024-11-26 18:32:57.818022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:23.542 [2024-11-26 18:32:57.818052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.542 [2024-11-26 18:32:57.818063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.542 [2024-11-26 18:32:57.818130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.542 [2024-11-26 18:32:57.818146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:23.542 [2024-11-26 18:32:57.818157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.542 [2024-11-26 18:32:57.818168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.542 [2024-11-26 18:32:57.818324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.542 [2024-11-26 18:32:57.818345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:23.542 [2024-11-26 18:32:57.818357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.542 [2024-11-26 18:32:57.818369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.542 [2024-11-26 18:32:57.818392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.542 [2024-11-26 18:32:57.818406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:23.542 [2024-11-26 18:32:57.818418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.542 [2024-11-26 18:32:57.818428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.542 [2024-11-26 18:32:57.904914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.542 [2024-11-26 18:32:57.904976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:23.542 [2024-11-26 18:32:57.905008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.542 [2024-11-26 18:32:57.905019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.542 [2024-11-26 18:32:57.979764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.542 [2024-11-26 18:32:57.979830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:23.542 [2024-11-26 18:32:57.979863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.542 [2024-11-26 18:32:57.979874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.542 [2024-11-26 18:32:57.979968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:23.542 [2024-11-26 18:32:57.980007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:23.542 [2024-11-26 18:32:57.980019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:23.542 [2024-11-26 18:32:57.980029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:23.542 [2024-11-26 18:32:57.980140] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:23.542 [2024-11-26 18:32:57.980157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:31:23.542 [2024-11-26 18:32:57.980169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:23.542 [2024-11-26 18:32:57.980180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:23.542 [2024-11-26 18:32:57.980298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:23.542 [2024-11-26 18:32:57.980326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:31:23.542 [2024-11-26 18:32:57.980346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:23.542 [2024-11-26 18:32:57.980356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:23.542 [2024-11-26 18:32:57.980414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:23.542 [2024-11-26 18:32:57.980432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:31:23.542 [2024-11-26 18:32:57.980443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:23.542 [2024-11-26 18:32:57.980454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:23.542 [2024-11-26 18:32:57.980501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:23.542 [2024-11-26 18:32:57.980521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:31:23.542 [2024-11-26 18:32:57.980541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:23.542 [2024-11-26 18:32:57.980564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:23.542 [2024-11-26 18:32:57.980619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:23.543 [2024-11-26 18:32:57.980635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:31:23.543 [2024-11-26 18:32:57.980661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:23.543 [2024-11-26 18:32:57.980675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:23.543 [2024-11-26 18:32:57.980848] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 390.796 ms, result 0
00:31:24.478 
00:31:24.478 
00:31:24.478 18:32:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:31:26.396 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
18:33:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-26 18:33:00.694533] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization...
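
A quick cross-check of the figures above, before the spdk_dd read gets under way. This is a minimal Python sketch using only numbers that appear in this log; the 4 KiB FTL block size is an inference from the layout dump printed during the next startup (where 0x1900000 blocks span 102400.00 MiB), not something stated on these lines.

    # Write amplification as dumped by ftl_dev_dump_stats above:
    total_writes, user_writes = 150208, 148224
    print(f"WAF: {total_writes / user_writes:.4f}")   # -> WAF: 1.0134, as logged

    # Inferred block size: the data_btm region is 0x1900000 blocks = 102400.00 MiB.
    block_bytes = 102400 * 1024 * 1024 // 0x1900000
    assert block_bytes == 4096

    # spdk_dd reads --count=262144 blocks starting at --skip=262144, i.e. the
    # second 1024 MB of the test data -- exactly the total that the "Copying:"
    # progress counter further below climbs to.
    assert 262144 * block_bytes == 1024 * 1024**2
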
00:31:26.396 [2024-11-26 18:33:00.694772] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83378 ] 00:31:26.654 [2024-11-26 18:33:00.886479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:26.654 [2024-11-26 18:33:01.024020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.912 [2024-11-26 18:33:01.337749] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:26.912 [2024-11-26 18:33:01.337864] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:27.172 [2024-11-26 18:33:01.498533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.172 [2024-11-26 18:33:01.498611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:27.172 [2024-11-26 18:33:01.498646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:27.172 [2024-11-26 18:33:01.498656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.172 [2024-11-26 18:33:01.498734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.172 [2024-11-26 18:33:01.498771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:27.172 [2024-11-26 18:33:01.498782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:31:27.172 [2024-11-26 18:33:01.498792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.172 [2024-11-26 18:33:01.498820] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:27.172 [2024-11-26 18:33:01.499752] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:27.172 [2024-11-26 18:33:01.499813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.172 [2024-11-26 18:33:01.499825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:27.172 [2024-11-26 18:33:01.499836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.999 ms 00:31:27.172 [2024-11-26 18:33:01.499846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.172 [2024-11-26 18:33:01.501900] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:27.172 [2024-11-26 18:33:01.517299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.172 [2024-11-26 18:33:01.517357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:27.172 [2024-11-26 18:33:01.517388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.400 ms 00:31:27.172 [2024-11-26 18:33:01.517398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.172 [2024-11-26 18:33:01.517469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.172 [2024-11-26 18:33:01.517487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:27.172 [2024-11-26 18:33:01.517499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:31:27.172 [2024-11-26 18:33:01.517509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.172 [2024-11-26 18:33:01.526609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:27.172 [2024-11-26 18:33:01.526690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:27.172 [2024-11-26 18:33:01.526772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.953 ms 00:31:27.172 [2024-11-26 18:33:01.526796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.172 [2024-11-26 18:33:01.526900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.172 [2024-11-26 18:33:01.526919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:27.172 [2024-11-26 18:33:01.526933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:31:27.172 [2024-11-26 18:33:01.526955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.172 [2024-11-26 18:33:01.527021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.172 [2024-11-26 18:33:01.527041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:27.172 [2024-11-26 18:33:01.527054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:31:27.172 [2024-11-26 18:33:01.527065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.172 [2024-11-26 18:33:01.527107] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:27.172 [2024-11-26 18:33:01.532121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.172 [2024-11-26 18:33:01.532175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:27.172 [2024-11-26 18:33:01.532211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.023 ms 00:31:27.172 [2024-11-26 18:33:01.532223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.172 [2024-11-26 18:33:01.532261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.172 [2024-11-26 18:33:01.532278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:27.172 [2024-11-26 18:33:01.532291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:27.172 [2024-11-26 18:33:01.532303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.172 [2024-11-26 18:33:01.532376] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:27.172 [2024-11-26 18:33:01.532427] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:27.172 [2024-11-26 18:33:01.532472] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:27.172 [2024-11-26 18:33:01.532500] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:27.172 [2024-11-26 18:33:01.532628] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:27.172 [2024-11-26 18:33:01.532647] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:27.172 [2024-11-26 18:33:01.532663] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:27.172 [2024-11-26 18:33:01.532679] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:27.172 [2024-11-26 18:33:01.532693] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:27.172 [2024-11-26 18:33:01.532706] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:27.172 [2024-11-26 18:33:01.532717] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:27.172 [2024-11-26 18:33:01.532734] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:27.172 [2024-11-26 18:33:01.532745] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:27.172 [2024-11-26 18:33:01.532759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.172 [2024-11-26 18:33:01.532771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:27.172 [2024-11-26 18:33:01.532783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:31:27.172 [2024-11-26 18:33:01.532794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.172 [2024-11-26 18:33:01.532892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.172 [2024-11-26 18:33:01.532908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:27.172 [2024-11-26 18:33:01.532921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:27.172 [2024-11-26 18:33:01.532932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.172 [2024-11-26 18:33:01.533057] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:27.172 [2024-11-26 18:33:01.533077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:27.172 [2024-11-26 18:33:01.533090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:27.172 [2024-11-26 18:33:01.533102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:27.172 [2024-11-26 18:33:01.533114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:27.172 [2024-11-26 18:33:01.533124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:27.172 [2024-11-26 18:33:01.533135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:27.172 [2024-11-26 18:33:01.533146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:27.172 [2024-11-26 18:33:01.533156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:27.172 [2024-11-26 18:33:01.533166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:27.172 [2024-11-26 18:33:01.533177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:27.172 [2024-11-26 18:33:01.533187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:27.172 [2024-11-26 18:33:01.533197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:27.172 [2024-11-26 18:33:01.533223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:27.172 [2024-11-26 18:33:01.533234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:27.172 [2024-11-26 18:33:01.533244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:27.172 [2024-11-26 18:33:01.533257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:27.172 [2024-11-26 18:33:01.533268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:27.172 [2024-11-26 18:33:01.533279] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:27.172 [2024-11-26 18:33:01.533290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:27.172 [2024-11-26 18:33:01.533301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:27.172 [2024-11-26 18:33:01.533311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:27.172 [2024-11-26 18:33:01.533322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:27.172 [2024-11-26 18:33:01.533332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:27.172 [2024-11-26 18:33:01.533342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:27.172 [2024-11-26 18:33:01.533353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:27.172 [2024-11-26 18:33:01.533363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:27.172 [2024-11-26 18:33:01.533374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:27.172 [2024-11-26 18:33:01.533384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:27.172 [2024-11-26 18:33:01.533394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:27.172 [2024-11-26 18:33:01.533404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:27.172 [2024-11-26 18:33:01.533414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:27.172 [2024-11-26 18:33:01.533425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:27.172 [2024-11-26 18:33:01.533435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:27.172 [2024-11-26 18:33:01.533446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:27.172 [2024-11-26 18:33:01.533461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:27.172 [2024-11-26 18:33:01.533471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:27.172 [2024-11-26 18:33:01.533481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:27.172 [2024-11-26 18:33:01.533493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:27.172 [2024-11-26 18:33:01.533504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:27.172 [2024-11-26 18:33:01.533515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:27.172 [2024-11-26 18:33:01.533525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:27.172 [2024-11-26 18:33:01.533535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:27.172 [2024-11-26 18:33:01.533544] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:27.172 [2024-11-26 18:33:01.533572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:27.172 [2024-11-26 18:33:01.533587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:27.172 [2024-11-26 18:33:01.533598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:27.172 [2024-11-26 18:33:01.533609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:27.172 [2024-11-26 18:33:01.533622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:27.172 [2024-11-26 18:33:01.533633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:27.172 
[2024-11-26 18:33:01.533645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:27.172 [2024-11-26 18:33:01.533655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:27.172 [2024-11-26 18:33:01.533666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:27.172 [2024-11-26 18:33:01.533678] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:27.172 [2024-11-26 18:33:01.533693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:27.172 [2024-11-26 18:33:01.533712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:27.172 [2024-11-26 18:33:01.533724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:27.172 [2024-11-26 18:33:01.533735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:27.172 [2024-11-26 18:33:01.533746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:27.172 [2024-11-26 18:33:01.533757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:27.172 [2024-11-26 18:33:01.533768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:27.172 [2024-11-26 18:33:01.533778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:27.172 [2024-11-26 18:33:01.533789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:27.172 [2024-11-26 18:33:01.533800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:27.172 [2024-11-26 18:33:01.533811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:27.172 [2024-11-26 18:33:01.533821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:27.172 [2024-11-26 18:33:01.533833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:27.172 [2024-11-26 18:33:01.533844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:27.172 [2024-11-26 18:33:01.533855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:27.172 [2024-11-26 18:33:01.533866] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:27.172 [2024-11-26 18:33:01.533879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:27.172 [2024-11-26 18:33:01.533892] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:27.172 [2024-11-26 18:33:01.533903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:27.172 [2024-11-26 18:33:01.533915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:27.172 [2024-11-26 18:33:01.533926] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:27.172 [2024-11-26 18:33:01.533938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.173 [2024-11-26 18:33:01.533950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:27.173 [2024-11-26 18:33:01.533962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.950 ms 00:31:27.173 [2024-11-26 18:33:01.533973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.173 [2024-11-26 18:33:01.570779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.173 [2024-11-26 18:33:01.570862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:27.173 [2024-11-26 18:33:01.570897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.740 ms 00:31:27.173 [2024-11-26 18:33:01.570913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.173 [2024-11-26 18:33:01.571032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.173 [2024-11-26 18:33:01.571047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:27.173 [2024-11-26 18:33:01.571060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:31:27.173 [2024-11-26 18:33:01.571070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.432 [2024-11-26 18:33:01.636043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.432 [2024-11-26 18:33:01.636115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:27.432 [2024-11-26 18:33:01.636147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.867 ms 00:31:27.432 [2024-11-26 18:33:01.636159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.432 [2024-11-26 18:33:01.636224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.432 [2024-11-26 18:33:01.636241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:27.432 [2024-11-26 18:33:01.636259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:27.432 [2024-11-26 18:33:01.636269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.432 [2024-11-26 18:33:01.636995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.432 [2024-11-26 18:33:01.637056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:27.432 [2024-11-26 18:33:01.637070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:31:27.432 [2024-11-26 18:33:01.637081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.432 [2024-11-26 18:33:01.637276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.432 [2024-11-26 18:33:01.637296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:27.432 [2024-11-26 18:33:01.637316] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:31:27.432 [2024-11-26 18:33:01.637327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.432 [2024-11-26 18:33:01.655433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.432 [2024-11-26 18:33:01.655492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:27.432 [2024-11-26 18:33:01.655523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.079 ms 00:31:27.432 [2024-11-26 18:33:01.655533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.432 [2024-11-26 18:33:01.670241] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:27.432 [2024-11-26 18:33:01.670300] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:27.432 [2024-11-26 18:33:01.670332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.432 [2024-11-26 18:33:01.670344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:27.432 [2024-11-26 18:33:01.670356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.647 ms 00:31:27.432 [2024-11-26 18:33:01.670366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.432 [2024-11-26 18:33:01.695944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.432 [2024-11-26 18:33:01.696019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:27.432 [2024-11-26 18:33:01.696051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.536 ms 00:31:27.432 [2024-11-26 18:33:01.696062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.432 [2024-11-26 18:33:01.709958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.432 [2024-11-26 18:33:01.710024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:27.432 [2024-11-26 18:33:01.710054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.816 ms 00:31:27.432 [2024-11-26 18:33:01.710064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.432 [2024-11-26 18:33:01.723209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.432 [2024-11-26 18:33:01.723264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:27.432 [2024-11-26 18:33:01.723294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.103 ms 00:31:27.432 [2024-11-26 18:33:01.723304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.432 [2024-11-26 18:33:01.724122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.432 [2024-11-26 18:33:01.724169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:27.432 [2024-11-26 18:33:01.724202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:31:27.432 [2024-11-26 18:33:01.724212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.432 [2024-11-26 18:33:01.793347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.432 [2024-11-26 18:33:01.793435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:27.432 [2024-11-26 18:33:01.793479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 69.111 ms 00:31:27.432 [2024-11-26 18:33:01.793507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.432 [2024-11-26 18:33:01.804199] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:27.432 [2024-11-26 18:33:01.806841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.432 [2024-11-26 18:33:01.806893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:27.432 [2024-11-26 18:33:01.806925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.225 ms 00:31:27.432 [2024-11-26 18:33:01.806935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.432 [2024-11-26 18:33:01.807042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.432 [2024-11-26 18:33:01.807077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:27.432 [2024-11-26 18:33:01.807093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:27.432 [2024-11-26 18:33:01.807104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.432 [2024-11-26 18:33:01.808240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.432 [2024-11-26 18:33:01.808287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:27.432 [2024-11-26 18:33:01.808317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.031 ms 00:31:27.432 [2024-11-26 18:33:01.808328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.432 [2024-11-26 18:33:01.808377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.432 [2024-11-26 18:33:01.808392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:27.432 [2024-11-26 18:33:01.808404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:27.432 [2024-11-26 18:33:01.808414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.432 [2024-11-26 18:33:01.808462] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:27.432 [2024-11-26 18:33:01.808477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.432 [2024-11-26 18:33:01.808488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:27.432 [2024-11-26 18:33:01.808514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:31:27.432 [2024-11-26 18:33:01.808541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.432 [2024-11-26 18:33:01.835549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.432 [2024-11-26 18:33:01.835616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:27.432 [2024-11-26 18:33:01.835653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.950 ms 00:31:27.432 [2024-11-26 18:33:01.835664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:27.432 [2024-11-26 18:33:01.835751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:27.432 [2024-11-26 18:33:01.835768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:27.432 [2024-11-26 18:33:01.835780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:31:27.432 [2024-11-26 18:33:01.835790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
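
The startup sequence above has brought the device back from its dirty state (note "Set FTL dirty state" and the NV cache / valid map / P2L restore steps). The L2P figures it reports are internally consistent; a short sketch, again assuming the 4 KiB logical block size inferred from the layout dump:

    # From ftl_layout_setup above: 20971520 L2P entries, 4 bytes each.
    entries, addr_size = 20971520, 4
    assert entries * addr_size == 80 * 1024**2   # "Region l2p ... blocks: 80.00 MiB"

    # One entry per 4 KiB logical block gives 80 GiB of addressable user space,
    # while only ~10 MiB of the table is kept resident at a time
    # ("l2p maximum resident size is: 9 (of 10) MiB" above), presumably paged
    # in on demand.
    assert entries * 4096 == 80 * 1024**3
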
00:31:27.432 [2024-11-26 18:33:01.837561] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 338.402 ms, result 0 00:31:28.809  [2024-11-26T18:33:04.206Z] Copying: 21/1024 [MB] (21 MBps) [2024-11-26T18:33:05.143Z] Copying: 44/1024 [MB] (22 MBps) [2024-11-26T18:33:06.080Z] Copying: 66/1024 [MB] (22 MBps) [2024-11-26T18:33:07.015Z] Copying: 88/1024 [MB] (21 MBps) [2024-11-26T18:33:08.392Z] Copying: 110/1024 [MB] (22 MBps) [2024-11-26T18:33:09.328Z] Copying: 133/1024 [MB] (22 MBps) [2024-11-26T18:33:10.262Z] Copying: 155/1024 [MB] (22 MBps) [2024-11-26T18:33:11.197Z] Copying: 177/1024 [MB] (22 MBps) [2024-11-26T18:33:12.131Z] Copying: 199/1024 [MB] (22 MBps) [2024-11-26T18:33:13.065Z] Copying: 222/1024 [MB] (22 MBps) [2024-11-26T18:33:14.438Z] Copying: 244/1024 [MB] (22 MBps) [2024-11-26T18:33:15.373Z] Copying: 267/1024 [MB] (22 MBps) [2024-11-26T18:33:16.307Z] Copying: 290/1024 [MB] (22 MBps) [2024-11-26T18:33:17.244Z] Copying: 312/1024 [MB] (22 MBps) [2024-11-26T18:33:18.181Z] Copying: 335/1024 [MB] (22 MBps) [2024-11-26T18:33:19.118Z] Copying: 358/1024 [MB] (22 MBps) [2024-11-26T18:33:20.051Z] Copying: 381/1024 [MB] (22 MBps) [2024-11-26T18:33:21.428Z] Copying: 404/1024 [MB] (23 MBps) [2024-11-26T18:33:22.364Z] Copying: 427/1024 [MB] (22 MBps) [2024-11-26T18:33:23.301Z] Copying: 449/1024 [MB] (22 MBps) [2024-11-26T18:33:24.237Z] Copying: 471/1024 [MB] (22 MBps) [2024-11-26T18:33:25.173Z] Copying: 494/1024 [MB] (22 MBps) [2024-11-26T18:33:26.109Z] Copying: 516/1024 [MB] (21 MBps) [2024-11-26T18:33:27.045Z] Copying: 538/1024 [MB] (22 MBps) [2024-11-26T18:33:28.420Z] Copying: 559/1024 [MB] (21 MBps) [2024-11-26T18:33:29.360Z] Copying: 581/1024 [MB] (21 MBps) [2024-11-26T18:33:30.296Z] Copying: 603/1024 [MB] (22 MBps) [2024-11-26T18:33:31.250Z] Copying: 626/1024 [MB] (22 MBps) [2024-11-26T18:33:32.201Z] Copying: 647/1024 [MB] (21 MBps) [2024-11-26T18:33:33.138Z] Copying: 669/1024 [MB] (21 MBps) [2024-11-26T18:33:34.075Z] Copying: 690/1024 [MB] (21 MBps) [2024-11-26T18:33:35.452Z] Copying: 712/1024 [MB] (21 MBps) [2024-11-26T18:33:36.019Z] Copying: 738/1024 [MB] (25 MBps) [2024-11-26T18:33:37.397Z] Copying: 761/1024 [MB] (23 MBps) [2024-11-26T18:33:38.333Z] Copying: 783/1024 [MB] (22 MBps) [2024-11-26T18:33:39.271Z] Copying: 805/1024 [MB] (21 MBps) [2024-11-26T18:33:40.210Z] Copying: 828/1024 [MB] (22 MBps) [2024-11-26T18:33:41.147Z] Copying: 851/1024 [MB] (22 MBps) [2024-11-26T18:33:42.085Z] Copying: 873/1024 [MB] (22 MBps) [2024-11-26T18:33:43.021Z] Copying: 895/1024 [MB] (22 MBps) [2024-11-26T18:33:44.398Z] Copying: 917/1024 [MB] (22 MBps) [2024-11-26T18:33:45.333Z] Copying: 939/1024 [MB] (21 MBps) [2024-11-26T18:33:46.271Z] Copying: 960/1024 [MB] (21 MBps) [2024-11-26T18:33:47.207Z] Copying: 982/1024 [MB] (21 MBps) [2024-11-26T18:33:48.144Z] Copying: 1004/1024 [MB] (21 MBps) [2024-11-26T18:33:48.144Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-26 18:33:48.105878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.683 [2024-11-26 18:33:48.105978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:13.683 [2024-11-26 18:33:48.106029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:13.683 [2024-11-26 18:33:48.106048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.683 [2024-11-26 18:33:48.106082] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
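
The per-chunk "Copying:" stamps above make it easy to sanity-check the reported average rate against wall time. A minimal sketch, bracketing the 1024 MB transfer with the two management-process messages on either side of it:

    from datetime import datetime

    # 'FTL startup' finished / first 'FTL shutdown' action, from the log above.
    start = datetime.fromisoformat("2024-11-26T18:33:01.837561")
    end = datetime.fromisoformat("2024-11-26T18:33:48.105878")
    rate = 1024 / (end - start).total_seconds()
    print(f"{rate:.1f} MB/s")   # ~22.1 MB/s, matching "average 22 MBps"
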
00:32:13.683 [2024-11-26 18:33:48.109945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.683 [2024-11-26 18:33:48.109989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:13.683 [2024-11-26 18:33:48.110020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.827 ms 00:32:13.683 [2024-11-26 18:33:48.110030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.683 [2024-11-26 18:33:48.110334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.683 [2024-11-26 18:33:48.110355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:13.683 [2024-11-26 18:33:48.110368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:32:13.683 [2024-11-26 18:33:48.110380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.683 [2024-11-26 18:33:48.113327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.683 [2024-11-26 18:33:48.113355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:13.683 [2024-11-26 18:33:48.113383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.928 ms 00:32:13.683 [2024-11-26 18:33:48.113399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.683 [2024-11-26 18:33:48.118932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.683 [2024-11-26 18:33:48.118967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:13.683 [2024-11-26 18:33:48.119003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.514 ms 00:32:13.683 [2024-11-26 18:33:48.119013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.944 [2024-11-26 18:33:48.144870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.944 [2024-11-26 18:33:48.144912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:13.944 [2024-11-26 18:33:48.144942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.792 ms 00:32:13.944 [2024-11-26 18:33:48.144952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.944 [2024-11-26 18:33:48.160332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.944 [2024-11-26 18:33:48.160373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:13.944 [2024-11-26 18:33:48.160403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.341 ms 00:32:13.944 [2024-11-26 18:33:48.160414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.944 [2024-11-26 18:33:48.162530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.944 [2024-11-26 18:33:48.162594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:13.944 [2024-11-26 18:33:48.162627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.064 ms 00:32:13.944 [2024-11-26 18:33:48.162638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.944 [2024-11-26 18:33:48.188669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.944 [2024-11-26 18:33:48.188727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:32:13.944 [2024-11-26 18:33:48.188757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.010 ms 00:32:13.944 [2024-11-26 18:33:48.188767] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.944 [2024-11-26 18:33:48.217259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.944 [2024-11-26 18:33:48.217295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:32:13.944 [2024-11-26 18:33:48.217324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.451 ms 00:32:13.944 [2024-11-26 18:33:48.217334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.944 [2024-11-26 18:33:48.243651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.944 [2024-11-26 18:33:48.243691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:13.944 [2024-11-26 18:33:48.243720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.279 ms 00:32:13.944 [2024-11-26 18:33:48.243730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.944 [2024-11-26 18:33:48.268806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.944 [2024-11-26 18:33:48.268846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:13.944 [2024-11-26 18:33:48.268876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.011 ms 00:32:13.944 [2024-11-26 18:33:48.268885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.944 [2024-11-26 18:33:48.268923] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:13.944 [2024-11-26 18:33:48.268951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:13.944 [2024-11-26 18:33:48.268967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:32:13.944 [2024-11-26 18:33:48.268978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.268988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.268998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 
[2024-11-26 18:33:48.269127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 
state: free 00:32:13.944 [2024-11-26 18:33:48.269410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:13.944 [2024-11-26 18:33:48.269529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 
0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.269994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.270005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.270015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.270026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.270037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.270049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.270060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.270071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.270082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:13.945 [2024-11-26 18:33:48.270100] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:13.945 [2024-11-26 18:33:48.270111] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 77389958-60a0-4bcf-8661-63f2fd98ce2c 00:32:13.945 [2024-11-26 18:33:48.270122] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:32:13.945 [2024-11-26 18:33:48.270133] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:13.945 [2024-11-26 18:33:48.270143] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:13.945 [2024-11-26 18:33:48.270154] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:13.945 [2024-11-26 18:33:48.270176] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:13.945 [2024-11-26 18:33:48.270187] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:13.945 [2024-11-26 18:33:48.270197] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:13.945 [2024-11-26 18:33:48.270206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:13.945 [2024-11-26 18:33:48.270216] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:13.945 [2024-11-26 18:33:48.270226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.945 [2024-11-26 18:33:48.270236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:13.945 [2024-11-26 18:33:48.270247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.304 ms 00:32:13.945 [2024-11-26 18:33:48.270262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.945 [2024-11-26 18:33:48.284909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.945 [2024-11-26 18:33:48.284962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:32:13.945 [2024-11-26 18:33:48.284992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.624 ms 00:32:13.945 [2024-11-26 18:33:48.285003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.945 [2024-11-26 18:33:48.285483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.945 [2024-11-26 18:33:48.285521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:13.945 [2024-11-26 18:33:48.285535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:32:13.945 [2024-11-26 18:33:48.285546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.945 [2024-11-26 18:33:48.323134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.945 [2024-11-26 18:33:48.323180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:13.945 [2024-11-26 18:33:48.323212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.945 [2024-11-26 18:33:48.323222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.945 [2024-11-26 18:33:48.323286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.945 [2024-11-26 18:33:48.323307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:13.945 [2024-11-26 18:33:48.323318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.945 [2024-11-26 18:33:48.323327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.945 [2024-11-26 18:33:48.323413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.945 [2024-11-26 18:33:48.323463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:13.945 [2024-11-26 18:33:48.323489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.945 [2024-11-26 18:33:48.323499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.945 [2024-11-26 18:33:48.323522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.945 [2024-11-26 18:33:48.323536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:13.945 [2024-11-26 18:33:48.323554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.945 [2024-11-26 18:33:48.323564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.205 [2024-11-26 18:33:48.408787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.205 [2024-11-26 18:33:48.408856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:14.205 [2024-11-26 18:33:48.408889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.205 [2024-11-26 18:33:48.408900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.205 [2024-11-26 18:33:48.478149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.205 [2024-11-26 18:33:48.478210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:14.205 [2024-11-26 18:33:48.478242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.205 [2024-11-26 18:33:48.478253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.205 [2024-11-26 18:33:48.478326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.205 [2024-11-26 
18:33:48.478343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:14.205 [2024-11-26 18:33:48.478354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.205 [2024-11-26 18:33:48.478364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.205 [2024-11-26 18:33:48.478431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.205 [2024-11-26 18:33:48.478446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:14.205 [2024-11-26 18:33:48.478458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.205 [2024-11-26 18:33:48.478474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.205 [2024-11-26 18:33:48.478683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.205 [2024-11-26 18:33:48.478704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:14.205 [2024-11-26 18:33:48.478717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.205 [2024-11-26 18:33:48.478728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.205 [2024-11-26 18:33:48.478796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.205 [2024-11-26 18:33:48.478815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:14.205 [2024-11-26 18:33:48.478828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.205 [2024-11-26 18:33:48.478838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.205 [2024-11-26 18:33:48.478893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.205 [2024-11-26 18:33:48.478909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:14.205 [2024-11-26 18:33:48.478921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.205 [2024-11-26 18:33:48.478932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.205 [2024-11-26 18:33:48.478987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:14.205 [2024-11-26 18:33:48.479004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:14.205 [2024-11-26 18:33:48.479016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:14.205 [2024-11-26 18:33:48.479033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:14.205 [2024-11-26 18:33:48.479199] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 373.296 ms, result 0 00:32:15.140 00:32:15.140 00:32:15.140 18:33:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:32:17.044 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:32:17.044 18:33:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:32:17.044 18:33:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:32:17.044 18:33:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:17.044 18:33:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:17.044 18:33:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:32:17.044 18:33:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:17.044 18:33:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:32:17.044 18:33:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81488 00:32:17.044 18:33:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81488 ']' 00:32:17.044 18:33:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81488 00:32:17.044 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81488) - No such process 00:32:17.044 Process with pid 81488 is not found 00:32:17.044 18:33:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81488 is not found' 00:32:17.044 18:33:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:32:17.303 18:33:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:32:17.303 Remove shared memory files 00:32:17.303 18:33:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:17.303 18:33:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:17.303 18:33:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:17.303 18:33:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:32:17.303 18:33:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:17.303 18:33:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:17.303 ************************************ 00:32:17.303 END TEST ftl_dirty_shutdown 00:32:17.303 ************************************ 00:32:17.303 00:32:17.303 real 4m0.281s 00:32:17.303 user 4m37.669s 00:32:17.303 sys 0m36.237s 00:32:17.303 18:33:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:17.303 18:33:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:17.303 18:33:51 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:32:17.303 18:33:51 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:17.303 18:33:51 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:17.303 18:33:51 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:17.303 ************************************ 00:32:17.303 START TEST ftl_upgrade_shutdown 00:32:17.303 ************************************ 00:32:17.303 18:33:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:32:17.563 * Looking for test storage... 
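(The block above is the standard teardown for an FTL test: md5sum -c replays the checksum recorded before the dirty shutdown against the file read back afterwards ("testfile2: OK" means the data survived), the scratch files and the ftl.json config are removed, and killprocess probes the target before signalling it. kill -0 delivers no signal at all; it only asks whether the pid still exists, and since the target here had already exited on its own, the helper just prints the "not found" message and moves on. A minimal sketch of that probe-then-kill pattern, assuming plain bash and not the exact autotest_common.sh code:

    killprocess() {
        local pid=$1
        # kill -0 sends no signal; it only tests whether the pid is still alive
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid" && wait "$pid"   # wait works because the test shell started it
        else
            echo "Process with pid $pid is not found"
        fi
    }
)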
00:32:17.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:32:17.563 18:33:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:17.563 18:33:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:32:17.563 18:33:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:17.563 18:33:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:17.563 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:17.563 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:17.563 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:17.563 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:32:17.563 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:32:17.563 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:32:17.563 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:32:17.563 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:32:17.563 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:32:17.563 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:32:17.563 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:17.563 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:32:17.563 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:32:17.563 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:17.563 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:17.563 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:32:17.563 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:17.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.564 --rc genhtml_branch_coverage=1 00:32:17.564 --rc genhtml_function_coverage=1 00:32:17.564 --rc genhtml_legend=1 00:32:17.564 --rc geninfo_all_blocks=1 00:32:17.564 --rc geninfo_unexecuted_blocks=1 00:32:17.564 00:32:17.564 ' 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:17.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.564 --rc genhtml_branch_coverage=1 00:32:17.564 --rc genhtml_function_coverage=1 00:32:17.564 --rc genhtml_legend=1 00:32:17.564 --rc geninfo_all_blocks=1 00:32:17.564 --rc geninfo_unexecuted_blocks=1 00:32:17.564 00:32:17.564 ' 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:17.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.564 --rc genhtml_branch_coverage=1 00:32:17.564 --rc genhtml_function_coverage=1 00:32:17.564 --rc genhtml_legend=1 00:32:17.564 --rc geninfo_all_blocks=1 00:32:17.564 --rc geninfo_unexecuted_blocks=1 00:32:17.564 00:32:17.564 ' 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:17.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:17.564 --rc genhtml_branch_coverage=1 00:32:17.564 --rc genhtml_function_coverage=1 00:32:17.564 --rc genhtml_legend=1 00:32:17.564 --rc geninfo_all_blocks=1 00:32:17.564 --rc geninfo_unexecuted_blocks=1 00:32:17.564 00:32:17.564 ' 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:32:17.564 18:33:51 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83935 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83935 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83935 ']' 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:17.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:17.564 18:33:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.565 18:33:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:17.565 18:33:51 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:17.824 [2024-11-26 18:33:52.058931] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
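(At this point tcp_target_setup has launched spdk_tgt pinned to core 0, per the '[0]' cpumask exported above, and recorded pid 83935; waitforlisten then blocks until the target's RPC socket at /var/tmp/spdk.sock answers, which is what the "Waiting for process to start up and listen..." line announces. A rough equivalent of that wait loop, assuming the repo-relative rpc.py path and the default socket; the real helper additionally handles timeouts and other RPC transports:

    build/bin/spdk_tgt --cpumask '[0]' &
    spdk_tgt_pid=$!
    # poll the RPC socket until the target responds; bail out if it died first
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$spdk_tgt_pid" 2>/dev/null || { echo "spdk_tgt exited early"; exit 1; }
        sleep 0.5
    done
)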
00:32:17.824 [2024-11-26 18:33:52.059138] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83935 ] 00:32:17.824 [2024-11-26 18:33:52.232679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.083 [2024-11-26 18:33:52.341016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.652 18:33:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:18.652 18:33:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:18.652 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:18.652 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:32:18.652 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:32:18.652 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:18.652 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:32:18.652 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:18.652 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:32:18.652 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:18.652 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:32:18.652 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:18.652 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:32:18.652 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:18.652 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:32:18.652 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:32:18.652 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:32:18.652 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:32:18.652 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:32:18.652 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:32:18.652 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:32:18.652 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:32:18.652 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:32:19.219 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:32:19.219 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:32:19.219 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:32:19.219 18:33:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:32:19.219 18:33:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:19.219 18:33:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:19.219 18:33:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:32:19.219 18:33:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:32:19.219 18:33:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:19.219 { 00:32:19.219 "name": "basen1", 00:32:19.219 "aliases": [ 00:32:19.219 "6439244a-9cdc-46f3-82f7-29446592bc56" 00:32:19.219 ], 00:32:19.219 "product_name": "NVMe disk", 00:32:19.219 "block_size": 4096, 00:32:19.219 "num_blocks": 1310720, 00:32:19.219 "uuid": "6439244a-9cdc-46f3-82f7-29446592bc56", 00:32:19.219 "numa_id": -1, 00:32:19.219 "assigned_rate_limits": { 00:32:19.219 "rw_ios_per_sec": 0, 00:32:19.219 "rw_mbytes_per_sec": 0, 00:32:19.219 "r_mbytes_per_sec": 0, 00:32:19.219 "w_mbytes_per_sec": 0 00:32:19.219 }, 00:32:19.219 "claimed": true, 00:32:19.219 "claim_type": "read_many_write_one", 00:32:19.219 "zoned": false, 00:32:19.219 "supported_io_types": { 00:32:19.219 "read": true, 00:32:19.219 "write": true, 00:32:19.219 "unmap": true, 00:32:19.219 "flush": true, 00:32:19.219 "reset": true, 00:32:19.219 "nvme_admin": true, 00:32:19.219 "nvme_io": true, 00:32:19.219 "nvme_io_md": false, 00:32:19.219 "write_zeroes": true, 00:32:19.219 "zcopy": false, 00:32:19.219 "get_zone_info": false, 00:32:19.219 "zone_management": false, 00:32:19.219 "zone_append": false, 00:32:19.219 "compare": true, 00:32:19.219 "compare_and_write": false, 00:32:19.219 "abort": true, 00:32:19.219 "seek_hole": false, 00:32:19.219 "seek_data": false, 00:32:19.219 "copy": true, 00:32:19.219 "nvme_iov_md": false 00:32:19.219 }, 00:32:19.219 "driver_specific": { 00:32:19.219 "nvme": [ 00:32:19.219 { 00:32:19.219 "pci_address": "0000:00:11.0", 00:32:19.219 "trid": { 00:32:19.219 "trtype": "PCIe", 00:32:19.219 "traddr": "0000:00:11.0" 00:32:19.219 }, 00:32:19.219 "ctrlr_data": { 00:32:19.219 "cntlid": 0, 00:32:19.219 "vendor_id": "0x1b36", 00:32:19.219 "model_number": "QEMU NVMe Ctrl", 00:32:19.219 "serial_number": "12341", 00:32:19.219 "firmware_revision": "8.0.0", 00:32:19.219 "subnqn": "nqn.2019-08.org.qemu:12341", 00:32:19.219 "oacs": { 00:32:19.219 "security": 0, 00:32:19.219 "format": 1, 00:32:19.219 "firmware": 0, 00:32:19.219 "ns_manage": 1 00:32:19.219 }, 00:32:19.219 "multi_ctrlr": false, 00:32:19.219 "ana_reporting": false 00:32:19.219 }, 00:32:19.219 "vs": { 00:32:19.219 "nvme_version": "1.4" 00:32:19.219 }, 00:32:19.219 "ns_data": { 00:32:19.219 "id": 1, 00:32:19.219 "can_share": false 00:32:19.219 } 00:32:19.219 } 00:32:19.219 ], 00:32:19.219 "mp_policy": "active_passive" 00:32:19.219 } 00:32:19.219 } 00:32:19.219 ]' 00:32:19.219 18:33:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:19.477 18:33:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:19.477 18:33:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:19.477 18:33:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:32:19.477 18:33:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:32:19.477 18:33:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:32:19.477 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:32:19.477 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:32:19.477 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:32:19.477 18:33:53 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:19.477 18:33:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:19.735 18:33:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=5cb6cec6-ec6e-4f33-a6a0-8c49774e855d 00:32:19.735 18:33:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:32:19.735 18:33:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5cb6cec6-ec6e-4f33-a6a0-8c49774e855d 00:32:19.994 18:33:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:32:20.252 18:33:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=549c2704-0645-4f57-b09c-58eff33870ea 00:32:20.252 18:33:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 549c2704-0645-4f57-b09c-58eff33870ea 00:32:20.252 18:33:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=3213dd29-d3cb-4b9f-92ff-1307df54a5f9 00:32:20.252 18:33:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 3213dd29-d3cb-4b9f-92ff-1307df54a5f9 ]] 00:32:20.252 18:33:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 3213dd29-d3cb-4b9f-92ff-1307df54a5f9 5120 00:32:20.252 18:33:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:32:20.252 18:33:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:32:20.252 18:33:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=3213dd29-d3cb-4b9f-92ff-1307df54a5f9 00:32:20.253 18:33:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:32:20.253 18:33:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 3213dd29-d3cb-4b9f-92ff-1307df54a5f9 00:32:20.253 18:33:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=3213dd29-d3cb-4b9f-92ff-1307df54a5f9 00:32:20.253 18:33:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:32:20.253 18:33:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:32:20.253 18:33:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:32:20.253 18:33:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3213dd29-d3cb-4b9f-92ff-1307df54a5f9 00:32:20.820 18:33:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:32:20.820 { 00:32:20.820 "name": "3213dd29-d3cb-4b9f-92ff-1307df54a5f9", 00:32:20.820 "aliases": [ 00:32:20.820 "lvs/basen1p0" 00:32:20.820 ], 00:32:20.820 "product_name": "Logical Volume", 00:32:20.820 "block_size": 4096, 00:32:20.820 "num_blocks": 5242880, 00:32:20.820 "uuid": "3213dd29-d3cb-4b9f-92ff-1307df54a5f9", 00:32:20.820 "assigned_rate_limits": { 00:32:20.820 "rw_ios_per_sec": 0, 00:32:20.820 "rw_mbytes_per_sec": 0, 00:32:20.820 "r_mbytes_per_sec": 0, 00:32:20.820 "w_mbytes_per_sec": 0 00:32:20.820 }, 00:32:20.820 "claimed": false, 00:32:20.820 "zoned": false, 00:32:20.820 "supported_io_types": { 00:32:20.820 "read": true, 00:32:20.820 "write": true, 00:32:20.820 "unmap": true, 00:32:20.820 "flush": false, 00:32:20.820 "reset": true, 00:32:20.820 "nvme_admin": false, 00:32:20.820 "nvme_io": false, 00:32:20.820 "nvme_io_md": false, 00:32:20.820 "write_zeroes": 
true, 00:32:20.820 "zcopy": false, 00:32:20.820 "get_zone_info": false, 00:32:20.820 "zone_management": false, 00:32:20.820 "zone_append": false, 00:32:20.820 "compare": false, 00:32:20.820 "compare_and_write": false, 00:32:20.820 "abort": false, 00:32:20.820 "seek_hole": true, 00:32:20.820 "seek_data": true, 00:32:20.820 "copy": false, 00:32:20.820 "nvme_iov_md": false 00:32:20.820 }, 00:32:20.820 "driver_specific": { 00:32:20.820 "lvol": { 00:32:20.820 "lvol_store_uuid": "549c2704-0645-4f57-b09c-58eff33870ea", 00:32:20.820 "base_bdev": "basen1", 00:32:20.820 "thin_provision": true, 00:32:20.820 "num_allocated_clusters": 0, 00:32:20.820 "snapshot": false, 00:32:20.820 "clone": false, 00:32:20.820 "esnap_clone": false 00:32:20.820 } 00:32:20.820 } 00:32:20.820 } 00:32:20.820 ]' 00:32:20.820 18:33:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:32:20.821 18:33:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:32:20.821 18:33:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:32:20.821 18:33:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:32:20.821 18:33:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:32:20.821 18:33:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:32:20.821 18:33:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:32:20.821 18:33:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:32:20.821 18:33:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:32:21.079 18:33:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:32:21.079 18:33:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:32:21.079 18:33:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:32:21.337 18:33:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:32:21.337 18:33:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:32:21.337 18:33:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 3213dd29-d3cb-4b9f-92ff-1307df54a5f9 -c cachen1p0 --l2p_dram_limit 2 00:32:21.597 [2024-11-26 18:33:55.870509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.597 [2024-11-26 18:33:55.870610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:21.597 [2024-11-26 18:33:55.870642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:32:21.597 [2024-11-26 18:33:55.870659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.597 [2024-11-26 18:33:55.870838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.597 [2024-11-26 18:33:55.870864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:21.597 [2024-11-26 18:33:55.870885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.139 ms 00:32:21.597 [2024-11-26 18:33:55.870901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.597 [2024-11-26 18:33:55.870945] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:21.597 [2024-11-26 
18:33:55.872268] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:21.597 [2024-11-26 18:33:55.872353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.597 [2024-11-26 18:33:55.872371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:21.597 [2024-11-26 18:33:55.872390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.411 ms 00:32:21.597 [2024-11-26 18:33:55.872405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.597 [2024-11-26 18:33:55.872575] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID cddebbe7-5cfa-4147-bb3d-f8964dffa03a 00:32:21.597 [2024-11-26 18:33:55.875103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.597 [2024-11-26 18:33:55.875157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:32:21.597 [2024-11-26 18:33:55.875178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:32:21.597 [2024-11-26 18:33:55.875197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.597 [2024-11-26 18:33:55.888396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.597 [2024-11-26 18:33:55.888463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:21.597 [2024-11-26 18:33:55.888485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.089 ms 00:32:21.597 [2024-11-26 18:33:55.888503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.597 [2024-11-26 18:33:55.888602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.597 [2024-11-26 18:33:55.888630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:21.597 [2024-11-26 18:33:55.888647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:32:21.597 [2024-11-26 18:33:55.888669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.597 [2024-11-26 18:33:55.888767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.597 [2024-11-26 18:33:55.888795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:21.597 [2024-11-26 18:33:55.888823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:32:21.597 [2024-11-26 18:33:55.888843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.597 [2024-11-26 18:33:55.888886] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:21.597 [2024-11-26 18:33:55.895741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.597 [2024-11-26 18:33:55.895990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:21.597 [2024-11-26 18:33:55.896035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.861 ms 00:32:21.597 [2024-11-26 18:33:55.896054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.597 [2024-11-26 18:33:55.896110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.597 [2024-11-26 18:33:55.896129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:21.597 [2024-11-26 18:33:55.896149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:32:21.597 [2024-11-26 18:33:55.896163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:32:21.597 [2024-11-26 18:33:55.896226] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:32:21.597 [2024-11-26 18:33:55.896427] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:21.597 [2024-11-26 18:33:55.896459] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:21.597 [2024-11-26 18:33:55.896488] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:21.597 [2024-11-26 18:33:55.896510] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:21.597 [2024-11-26 18:33:55.896528] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:21.597 [2024-11-26 18:33:55.896547] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:21.597 [2024-11-26 18:33:55.896592] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:21.597 [2024-11-26 18:33:55.896611] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:21.597 [2024-11-26 18:33:55.896625] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:21.597 [2024-11-26 18:33:55.896644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.597 [2024-11-26 18:33:55.896659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:21.597 [2024-11-26 18:33:55.896679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.422 ms 00:32:21.597 [2024-11-26 18:33:55.896696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.597 [2024-11-26 18:33:55.896822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.597 [2024-11-26 18:33:55.896859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:21.597 [2024-11-26 18:33:55.896880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.087 ms 00:32:21.597 [2024-11-26 18:33:55.896894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.597 [2024-11-26 18:33:55.897066] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:21.597 [2024-11-26 18:33:55.897100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:21.597 [2024-11-26 18:33:55.897122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:21.597 [2024-11-26 18:33:55.897138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.597 [2024-11-26 18:33:55.897157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:21.597 [2024-11-26 18:33:55.897171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:21.597 [2024-11-26 18:33:55.897188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:21.597 [2024-11-26 18:33:55.897201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:21.597 [2024-11-26 18:33:55.897218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:21.597 [2024-11-26 18:33:55.897231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.597 [2024-11-26 18:33:55.897248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:21.597 [2024-11-26 18:33:55.897262] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:32:21.597 [2024-11-26 18:33:55.897279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.597 [2024-11-26 18:33:55.897292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:21.597 [2024-11-26 18:33:55.897309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:32:21.597 [2024-11-26 18:33:55.897322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.597 [2024-11-26 18:33:55.897346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:21.597 [2024-11-26 18:33:55.897360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:21.597 [2024-11-26 18:33:55.897379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.597 [2024-11-26 18:33:55.897393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:21.598 [2024-11-26 18:33:55.897410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:21.598 [2024-11-26 18:33:55.897424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:21.598 [2024-11-26 18:33:55.897440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:21.598 [2024-11-26 18:33:55.897454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:21.598 [2024-11-26 18:33:55.897471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:21.598 [2024-11-26 18:33:55.897485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:21.598 [2024-11-26 18:33:55.897502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:21.598 [2024-11-26 18:33:55.897515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:21.598 [2024-11-26 18:33:55.897532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:21.598 [2024-11-26 18:33:55.897546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:21.598 [2024-11-26 18:33:55.897580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:21.598 [2024-11-26 18:33:55.897596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:21.598 [2024-11-26 18:33:55.897617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:21.598 [2024-11-26 18:33:55.897630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.598 [2024-11-26 18:33:55.897648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:21.598 [2024-11-26 18:33:55.897661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:21.598 [2024-11-26 18:33:55.897678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.598 [2024-11-26 18:33:55.897693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:21.598 [2024-11-26 18:33:55.897710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:21.598 [2024-11-26 18:33:55.897724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.598 [2024-11-26 18:33:55.897741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:21.598 [2024-11-26 18:33:55.897754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:21.598 [2024-11-26 18:33:55.897773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.598 [2024-11-26 18:33:55.897787] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:32:21.598 [2024-11-26 18:33:55.897810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:21.598 [2024-11-26 18:33:55.897825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:21.598 [2024-11-26 18:33:55.897845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:21.598 [2024-11-26 18:33:55.897860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:21.598 [2024-11-26 18:33:55.897880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:21.598 [2024-11-26 18:33:55.897894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:21.598 [2024-11-26 18:33:55.897912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:21.598 [2024-11-26 18:33:55.897928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:21.598 [2024-11-26 18:33:55.897951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:21.598 [2024-11-26 18:33:55.897972] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:21.598 [2024-11-26 18:33:55.897997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:21.598 [2024-11-26 18:33:55.898014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:21.598 [2024-11-26 18:33:55.898032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:21.598 [2024-11-26 18:33:55.898046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:21.598 [2024-11-26 18:33:55.898064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:21.598 [2024-11-26 18:33:55.898079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:21.598 [2024-11-26 18:33:55.898097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:21.598 [2024-11-26 18:33:55.898111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:21.598 [2024-11-26 18:33:55.898129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:21.598 [2024-11-26 18:33:55.898143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:21.598 [2024-11-26 18:33:55.898164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:21.598 [2024-11-26 18:33:55.898179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:21.598 [2024-11-26 18:33:55.898197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:21.598 [2024-11-26 18:33:55.898211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:21.598 [2024-11-26 18:33:55.898230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:21.598 [2024-11-26 18:33:55.898245] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:32:21.598 [2024-11-26 18:33:55.898265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:21.598 [2024-11-26 18:33:55.898281] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:21.598 [2024-11-26 18:33:55.898299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:21.598 [2024-11-26 18:33:55.898314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:21.598 [2024-11-26 18:33:55.898332] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:21.598 [2024-11-26 18:33:55.898350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:21.598 [2024-11-26 18:33:55.898368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:21.598 [2024-11-26 18:33:55.898385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.376 ms 00:32:21.598 [2024-11-26 18:33:55.898402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:21.598 [2024-11-26 18:33:55.898474] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
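(A quick consistency check on the layout the target just dumped: with an L2P address size of 4 bytes, the 3,774,873 L2P entries need about 14.40 MiB, which matches the "l2p" region in the NV cache layout being sized 14.50 MiB (the table plus alignment padding). The same entry count times the 4 KiB block size gives the user-addressable capacity; the rest of the 20 GiB base bdev is presumably held back for FTL metadata and overprovisioned bands.

    3,774,873 entries * 4 B     = 15,099,492 B ~ 14.40 MiB   (l2p region: 14.50 MiB)
    3,774,873 blocks  * 4,096 B ~ 14.4 GiB user capacity     (base bdev: 20,480 MiB)

The scrub announced here covers the 5 NV cache chunks reported during layout setup.)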
00:32:21.598 [2024-11-26 18:33:55.898500] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:32:24.885 [2024-11-26 18:33:58.911860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.885 [2024-11-26 18:33:58.912265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:32:24.885 [2024-11-26 18:33:58.912315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3013.399 ms 00:32:24.885 [2024-11-26 18:33:58.912337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.885 [2024-11-26 18:33:58.959267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.885 [2024-11-26 18:33:58.959612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:24.885 [2024-11-26 18:33:58.959655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.554 ms 00:32:24.885 [2024-11-26 18:33:58.959677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.885 [2024-11-26 18:33:58.959820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.885 [2024-11-26 18:33:58.959851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:24.885 [2024-11-26 18:33:58.959869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:32:24.885 [2024-11-26 18:33:58.959897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.885 [2024-11-26 18:33:59.014958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.885 [2024-11-26 18:33:59.015210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:24.885 [2024-11-26 18:33:59.015247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 54.959 ms 00:32:24.885 [2024-11-26 18:33:59.015268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.885 [2024-11-26 18:33:59.015334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.885 [2024-11-26 18:33:59.015356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:24.885 [2024-11-26 18:33:59.015373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:32:24.885 [2024-11-26 18:33:59.015390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.885 [2024-11-26 18:33:59.016168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.885 [2024-11-26 18:33:59.016224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:24.885 [2024-11-26 18:33:59.016259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.672 ms 00:32:24.885 [2024-11-26 18:33:59.016278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.885 [2024-11-26 18:33:59.016348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.885 [2024-11-26 18:33:59.016373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:24.885 [2024-11-26 18:33:59.016389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:32:24.885 [2024-11-26 18:33:59.016408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.885 [2024-11-26 18:33:59.041773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.885 [2024-11-26 18:33:59.041997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:24.885 [2024-11-26 18:33:59.042033] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.334 ms 00:32:24.885 [2024-11-26 18:33:59.042054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.885 [2024-11-26 18:33:59.070190] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:24.885 [2024-11-26 18:33:59.072025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.885 [2024-11-26 18:33:59.072072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:24.885 [2024-11-26 18:33:59.072099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.831 ms 00:32:24.885 [2024-11-26 18:33:59.072114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.885 [2024-11-26 18:33:59.105456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.885 [2024-11-26 18:33:59.105714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:32:24.885 [2024-11-26 18:33:59.105761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.289 ms 00:32:24.885 [2024-11-26 18:33:59.105780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.885 [2024-11-26 18:33:59.105941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.885 [2024-11-26 18:33:59.105966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:24.885 [2024-11-26 18:33:59.105991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.094 ms 00:32:24.885 [2024-11-26 18:33:59.106005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.885 [2024-11-26 18:33:59.144193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.885 [2024-11-26 18:33:59.144252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:32:24.885 [2024-11-26 18:33:59.144282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.098 ms 00:32:24.885 [2024-11-26 18:33:59.144298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.885 [2024-11-26 18:33:59.182700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.885 [2024-11-26 18:33:59.182759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:32:24.885 [2024-11-26 18:33:59.182802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.330 ms 00:32:24.885 [2024-11-26 18:33:59.182818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.885 [2024-11-26 18:33:59.183773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.885 [2024-11-26 18:33:59.183987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:24.885 [2024-11-26 18:33:59.184038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.890 ms 00:32:24.885 [2024-11-26 18:33:59.184056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.885 [2024-11-26 18:33:59.289468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.885 [2024-11-26 18:33:59.289522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:32:24.885 [2024-11-26 18:33:59.289548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 105.296 ms 00:32:24.885 [2024-11-26 18:33:59.289595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.885 [2024-11-26 18:33:59.316640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
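Each management step in this trace is a four-line group emitted by trace_step: Action, name, duration, status. That makes the slow steps (Scrub NV cache at 3013.399 ms above, versus sub-millisecond map initializations) easy to pull out of a saved console log. A rough bash sketch, assuming this output was captured to console.log (a hypothetical filename):

  # Rank trace_step actions by duration, slowest first; the pattern
  # matching is heuristic but follows the line layout above.
  grep -oE 'name: [A-Za-z0-9 ]+|duration: [0-9.]+ ms' console.log |
      paste - - | sort -t: -k3 -rn | head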
00:32:24.885 [2024-11-26 18:33:59.316684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:32:24.885 [2024-11-26 18:33:59.316703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.895 ms 00:32:24.885 [2024-11-26 18:33:59.316713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:24.885 [2024-11-26 18:33:59.341482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:24.885 [2024-11-26 18:33:59.341523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:32:24.885 [2024-11-26 18:33:59.341542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.718 ms 00:32:24.885 [2024-11-26 18:33:59.341566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:25.143 [2024-11-26 18:33:59.366244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:25.143 [2024-11-26 18:33:59.366285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:32:25.143 [2024-11-26 18:33:59.366305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.610 ms 00:32:25.143 [2024-11-26 18:33:59.366314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:25.143 [2024-11-26 18:33:59.366366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:25.143 [2024-11-26 18:33:59.366381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:25.143 [2024-11-26 18:33:59.366398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:32:25.143 [2024-11-26 18:33:59.366408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:25.143 [2024-11-26 18:33:59.366505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:25.143 [2024-11-26 18:33:59.366525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:25.143 [2024-11-26 18:33:59.366538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:32:25.143 [2024-11-26 18:33:59.366549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:25.143 [2024-11-26 18:33:59.368014] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3496.946 ms, result 0 00:32:25.143 { 00:32:25.143 "name": "ftl", 00:32:25.143 "uuid": "cddebbe7-5cfa-4147-bb3d-f8964dffa03a" 00:32:25.143 } 00:32:25.143 18:33:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:32:25.402 [2024-11-26 18:33:59.642904] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:25.402 18:33:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:32:25.661 18:33:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:32:25.919 [2024-11-26 18:34:00.191499] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:25.919 18:34:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:32:26.177 [2024-11-26 18:34:00.413394] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:26.177 18:34:00 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:32:26.436 Fill FTL, iteration 1 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84059 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84059 /var/tmp/spdk.tgt.sock 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84059 ']' 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:32:26.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:26.436 18:34:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:26.695 [2024-11-26 18:34:00.959854] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
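The parameters set just above define the whole fill/checksum geometry: bs=1048576 and count=1024 make each pass exactly 1 GiB (the $size value of 1073741824), seek and skip advance in bs-sized units, and qd=2 keeps two I/Os in flight. An illustrative reconstruction of the loop being traced, not the test script itself (tcp_dd stands for the spdk_dd invocation shown in the log, FILE for the test/ftl/file path):

  # Sketch of the fill/checksum loop traced above (reconstruction).
  bs=1048576 count=1024 qd=2 iterations=2 seek=0 skip=0
  sums=()
  for ((i = 0; i < iterations; i++)); do
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$((seek + count))
      tcp_dd --ib=ftln1 --of=FILE --bs=$bs --count=$count --qd=$qd --skip=$skip
      skip=$((skip + count))
      sums[i]=$(md5sum FILE | cut -f1 '-d ')
  done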
00:32:26.695 [2024-11-26 18:34:00.960615] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84059 ] 00:32:26.695 [2024-11-26 18:34:01.129580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.954 [2024-11-26 18:34:01.246009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:27.999 18:34:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:27.999 18:34:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:27.999 18:34:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:32:27.999 ftln1 00:32:27.999 18:34:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:32:27.999 18:34:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:32:28.261 18:34:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:32:28.261 18:34:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84059 00:32:28.261 18:34:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84059 ']' 00:32:28.261 18:34:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84059 00:32:28.261 18:34:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:28.261 18:34:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:28.261 18:34:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84059 00:32:28.261 killing process with pid 84059 00:32:28.261 18:34:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:28.261 18:34:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:28.261 18:34:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84059' 00:32:28.261 18:34:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84059 00:32:28.261 18:34:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84059 00:32:30.162 18:34:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:32:30.162 18:34:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:32:30.162 [2024-11-26 18:34:04.508210] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
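The initiator bring-up traced above reads as one unit: a second SPDK app (on core 1, with its own RPC socket at /var/tmp/spdk.tgt.sock) attaches the FTL namespace exported over NVMe/TCP, which surfaces as bdev ftln1, and the bdev subsystem state is wrapped in a JSON envelope so the spdk_dd runs can consume it via --json. The same sequence, condensed from the trace (paths abbreviated):

  # Compose the initiator config the way ftl/common.sh does above.
  rpc='scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
  $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2018-09.io.spdk:cnode0
  {
      echo '{"subsystems": ['
      $rpc save_subsystem_config -n bdev
      echo ']}'
  } > test/ftl/config/ini.json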
00:32:30.162 [2024-11-26 18:34:04.508917] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84106 ] 00:32:30.421 [2024-11-26 18:34:04.691024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.421 [2024-11-26 18:34:04.788978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:31.797  [2024-11-26T18:34:07.194Z] Copying: 241/1024 [MB] (241 MBps) [2024-11-26T18:34:08.571Z] Copying: 482/1024 [MB] (241 MBps) [2024-11-26T18:34:09.508Z] Copying: 723/1024 [MB] (241 MBps) [2024-11-26T18:34:09.508Z] Copying: 968/1024 [MB] (245 MBps) [2024-11-26T18:34:10.444Z] Copying: 1024/1024 [MB] (average 241 MBps) 00:32:35.983 00:32:35.983 Calculate MD5 checksum, iteration 1 00:32:35.983 18:34:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:32:35.983 18:34:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:32:35.983 18:34:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:35.983 18:34:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:35.983 18:34:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:35.983 18:34:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:35.983 18:34:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:35.983 18:34:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:35.983 [2024-11-26 18:34:10.432132] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
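spdk_dd mixes host-file and bdev endpoints: --if/--of name local files while --ib/--ob name SPDK bdevs. That is why the fill pass streams /dev/urandom into ftln1 and the checksum pass just launched streams ftln1 back out to a file for md5sum. Side by side (ini.json path abbreviated):

  # The two spdk_dd directions used in this test.
  spdk_dd --json=ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0  # fill
  spdk_dd --json=ini.json --ib=ftln1 --of=file --bs=1048576 --count=1024 --qd=2 --skip=0          # read back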
00:32:35.983 [2024-11-26 18:34:10.432843] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84167 ] 00:32:36.241 [2024-11-26 18:34:10.601222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:36.498 [2024-11-26 18:34:10.707270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:37.871  [2024-11-26T18:34:13.263Z] Copying: 488/1024 [MB] (488 MBps) [2024-11-26T18:34:13.520Z] Copying: 945/1024 [MB] (457 MBps) [2024-11-26T18:34:14.086Z] Copying: 1024/1024 [MB] (average 470 MBps) 00:32:39.625 00:32:39.625 18:34:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:32:39.625 18:34:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:41.530 18:34:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:32:41.530 Fill FTL, iteration 2 00:32:41.530 18:34:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=a120c35d1af93e712eb91256f9eef129 00:32:41.530 18:34:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:32:41.530 18:34:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:41.530 18:34:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:32:41.530 18:34:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:32:41.530 18:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:41.530 18:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:41.530 18:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:41.530 18:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:41.530 18:34:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:32:41.530 [2024-11-26 18:34:15.962801] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
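The MB/s markers allow a quick consistency check: the fill pass moved 1024 MB at an average 241 MBps and the read-back above averaged 470 MBps, so reads ran roughly twice as fast as writes here.

  # Rough transfer times implied by the averages above.
  awk 'BEGIN { printf "fill: ~%.1f s  read-back: ~%.1f s\n", 1024/241, 1024/470 }'   # ~4.2 s / ~2.2 s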
00:32:41.530 [2024-11-26 18:34:15.962995] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84230 ] 00:32:41.791 [2024-11-26 18:34:16.146711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.050 [2024-11-26 18:34:16.294623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:43.427  [2024-11-26T18:34:18.823Z] Copying: 244/1024 [MB] (244 MBps) [2024-11-26T18:34:20.200Z] Copying: 487/1024 [MB] (243 MBps) [2024-11-26T18:34:21.136Z] Copying: 738/1024 [MB] (251 MBps) [2024-11-26T18:34:21.136Z] Copying: 987/1024 [MB] (249 MBps) [2024-11-26T18:34:22.070Z] Copying: 1024/1024 [MB] (average 246 MBps) 00:32:47.609 00:32:47.609 Calculate MD5 checksum, iteration 2 00:32:47.609 18:34:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:32:47.609 18:34:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:32:47.609 18:34:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:47.609 18:34:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:47.609 18:34:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:47.609 18:34:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:47.609 18:34:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:47.609 18:34:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:47.609 [2024-11-26 18:34:22.034203] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
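The MD5 just recorded (sums[0]=a120c35d…) is stored rather than checked immediately, and iteration 2 writes the next 1 GiB at --seek=1024. The natural use of the sums array, once the FTL device has been shut down and reopened, is to re-read each extent and compare; a hypothetical shape of that verify pass (not shown in this part of the log), reusing the variables from the loop sketch earlier:

  # Hypothetical post-upgrade verify pass against the recorded checksums.
  for ((i = 0; i < iterations; i++)); do
      tcp_dd --ib=ftln1 --of=FILE --bs=$bs --count=$count --qd=$qd --skip=$((i * count))
      [[ "$(md5sum FILE | cut -f1 '-d ')" == "${sums[i]}" ]] ||
          echo "iteration $i: checksum mismatch"
  done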
00:32:47.609 [2024-11-26 18:34:22.034355] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84305 ] 00:32:47.868 [2024-11-26 18:34:22.201501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.868 [2024-11-26 18:34:22.312766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:49.772  [2024-11-26T18:34:25.170Z] Copying: 415/1024 [MB] (415 MBps) [2024-11-26T18:34:25.736Z] Copying: 814/1024 [MB] (399 MBps) [2024-11-26T18:34:26.671Z] Copying: 1024/1024 [MB] (average 407 MBps) 00:32:52.210 00:32:52.469 18:34:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:32:52.469 18:34:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:54.376 18:34:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:32:54.376 18:34:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b9672791b0e9cd2ec74b021a236eb3d9 00:32:54.376 18:34:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:32:54.376 18:34:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:32:54.376 18:34:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:54.376 [2024-11-26 18:34:28.718938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:54.376 [2024-11-26 18:34:28.718993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:54.376 [2024-11-26 18:34:28.719014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:32:54.376 [2024-11-26 18:34:28.719026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:54.376 [2024-11-26 18:34:28.719059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:54.376 [2024-11-26 18:34:28.719080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:54.376 [2024-11-26 18:34:28.719104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:54.376 [2024-11-26 18:34:28.719144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:54.376 [2024-11-26 18:34:28.719169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:54.377 [2024-11-26 18:34:28.719182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:54.377 [2024-11-26 18:34:28.719194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:54.377 [2024-11-26 18:34:28.719204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:54.377 [2024-11-26 18:34:28.719309] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.333 ms, result 0 00:32:54.377 true 00:32:54.377 18:34:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:54.636 { 00:32:54.636 "name": "ftl", 00:32:54.636 "properties": [ 00:32:54.636 { 00:32:54.636 "name": "superblock_version", 00:32:54.636 "value": 5, 00:32:54.636 "read-only": true 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "name": "base_device", 00:32:54.636 "bands": [ 00:32:54.636 { 00:32:54.636 "id": 
0, 00:32:54.636 "state": "FREE", 00:32:54.636 "validity": 0.0 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "id": 1, 00:32:54.636 "state": "FREE", 00:32:54.636 "validity": 0.0 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "id": 2, 00:32:54.636 "state": "FREE", 00:32:54.636 "validity": 0.0 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "id": 3, 00:32:54.636 "state": "FREE", 00:32:54.636 "validity": 0.0 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "id": 4, 00:32:54.636 "state": "FREE", 00:32:54.636 "validity": 0.0 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "id": 5, 00:32:54.636 "state": "FREE", 00:32:54.636 "validity": 0.0 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "id": 6, 00:32:54.636 "state": "FREE", 00:32:54.636 "validity": 0.0 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "id": 7, 00:32:54.636 "state": "FREE", 00:32:54.636 "validity": 0.0 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "id": 8, 00:32:54.636 "state": "FREE", 00:32:54.636 "validity": 0.0 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "id": 9, 00:32:54.636 "state": "FREE", 00:32:54.636 "validity": 0.0 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "id": 10, 00:32:54.636 "state": "FREE", 00:32:54.636 "validity": 0.0 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "id": 11, 00:32:54.636 "state": "FREE", 00:32:54.636 "validity": 0.0 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "id": 12, 00:32:54.636 "state": "FREE", 00:32:54.636 "validity": 0.0 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "id": 13, 00:32:54.636 "state": "FREE", 00:32:54.636 "validity": 0.0 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "id": 14, 00:32:54.636 "state": "FREE", 00:32:54.636 "validity": 0.0 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "id": 15, 00:32:54.636 "state": "FREE", 00:32:54.636 "validity": 0.0 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "id": 16, 00:32:54.636 "state": "FREE", 00:32:54.636 "validity": 0.0 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "id": 17, 00:32:54.636 "state": "FREE", 00:32:54.636 "validity": 0.0 00:32:54.636 } 00:32:54.636 ], 00:32:54.636 "read-only": true 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "name": "cache_device", 00:32:54.636 "type": "bdev", 00:32:54.636 "chunks": [ 00:32:54.636 { 00:32:54.636 "id": 0, 00:32:54.636 "state": "INACTIVE", 00:32:54.636 "utilization": 0.0 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "id": 1, 00:32:54.636 "state": "CLOSED", 00:32:54.636 "utilization": 1.0 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "id": 2, 00:32:54.636 "state": "CLOSED", 00:32:54.636 "utilization": 1.0 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "id": 3, 00:32:54.636 "state": "OPEN", 00:32:54.636 "utilization": 0.001953125 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "id": 4, 00:32:54.636 "state": "OPEN", 00:32:54.636 "utilization": 0.0 00:32:54.636 } 00:32:54.636 ], 00:32:54.636 "read-only": true 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "name": "verbose_mode", 00:32:54.636 "value": true, 00:32:54.636 "unit": "", 00:32:54.636 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:54.636 }, 00:32:54.636 { 00:32:54.636 "name": "prep_upgrade_on_shutdown", 00:32:54.636 "value": false, 00:32:54.636 "unit": "", 00:32:54.636 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:54.636 } 00:32:54.636 ] 00:32:54.636 } 00:32:54.636 18:34:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:32:54.895 [2024-11-26 18:34:29.111254] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:54.895 [2024-11-26 18:34:29.111292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:54.895 [2024-11-26 18:34:29.111306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:54.895 [2024-11-26 18:34:29.111316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:54.895 [2024-11-26 18:34:29.111343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:54.895 [2024-11-26 18:34:29.111356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:54.895 [2024-11-26 18:34:29.111367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:32:54.895 [2024-11-26 18:34:29.111376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:54.895 [2024-11-26 18:34:29.111398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:54.895 [2024-11-26 18:34:29.111410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:54.895 [2024-11-26 18:34:29.111420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:54.896 [2024-11-26 18:34:29.111429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:54.896 [2024-11-26 18:34:29.111486] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.218 ms, result 0 00:32:54.896 true 00:32:54.896 18:34:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:32:54.896 18:34:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:54.896 18:34:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:32:55.154 18:34:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:32:55.154 18:34:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:32:55.154 18:34:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:32:55.413 [2024-11-26 18:34:29.615042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:55.413 [2024-11-26 18:34:29.615105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:32:55.413 [2024-11-26 18:34:29.615151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:55.413 [2024-11-26 18:34:29.615177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:55.413 [2024-11-26 18:34:29.615208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:55.413 [2024-11-26 18:34:29.615236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:32:55.413 [2024-11-26 18:34:29.615263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:55.413 [2024-11-26 18:34:29.615273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:55.413 [2024-11-26 18:34:29.615295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:55.413 [2024-11-26 18:34:29.615307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:32:55.413 [2024-11-26 18:34:29.615333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:55.413 [2024-11-26 
18:34:29.615342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:55.413 [2024-11-26 18:34:29.615434] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.345 ms, result 0 00:32:55.413 true 00:32:55.413 18:34:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:32:55.413 { 00:32:55.413 "name": "ftl", 00:32:55.413 "properties": [ 00:32:55.413 { 00:32:55.413 "name": "superblock_version", 00:32:55.413 "value": 5, 00:32:55.413 "read-only": true 00:32:55.413 }, 00:32:55.413 { 00:32:55.413 "name": "base_device", 00:32:55.413 "bands": [ 00:32:55.413 { 00:32:55.413 "id": 0, 00:32:55.413 "state": "FREE", 00:32:55.413 "validity": 0.0 00:32:55.413 }, 00:32:55.413 { 00:32:55.413 "id": 1, 00:32:55.413 "state": "FREE", 00:32:55.413 "validity": 0.0 00:32:55.413 }, 00:32:55.413 { 00:32:55.413 "id": 2, 00:32:55.413 "state": "FREE", 00:32:55.413 "validity": 0.0 00:32:55.413 }, 00:32:55.413 { 00:32:55.413 "id": 3, 00:32:55.413 "state": "FREE", 00:32:55.413 "validity": 0.0 00:32:55.413 }, 00:32:55.413 { 00:32:55.413 "id": 4, 00:32:55.413 "state": "FREE", 00:32:55.413 "validity": 0.0 00:32:55.413 }, 00:32:55.413 { 00:32:55.413 "id": 5, 00:32:55.413 "state": "FREE", 00:32:55.413 "validity": 0.0 00:32:55.413 }, 00:32:55.413 { 00:32:55.413 "id": 6, 00:32:55.413 "state": "FREE", 00:32:55.413 "validity": 0.0 00:32:55.413 }, 00:32:55.413 { 00:32:55.413 "id": 7, 00:32:55.413 "state": "FREE", 00:32:55.413 "validity": 0.0 00:32:55.413 }, 00:32:55.413 { 00:32:55.413 "id": 8, 00:32:55.413 "state": "FREE", 00:32:55.413 "validity": 0.0 00:32:55.413 }, 00:32:55.413 { 00:32:55.413 "id": 9, 00:32:55.413 "state": "FREE", 00:32:55.413 "validity": 0.0 00:32:55.413 }, 00:32:55.413 { 00:32:55.413 "id": 10, 00:32:55.413 "state": "FREE", 00:32:55.413 "validity": 0.0 00:32:55.413 }, 00:32:55.413 { 00:32:55.413 "id": 11, 00:32:55.413 "state": "FREE", 00:32:55.413 "validity": 0.0 00:32:55.413 }, 00:32:55.413 { 00:32:55.413 "id": 12, 00:32:55.413 "state": "FREE", 00:32:55.413 "validity": 0.0 00:32:55.413 }, 00:32:55.413 { 00:32:55.413 "id": 13, 00:32:55.413 "state": "FREE", 00:32:55.413 "validity": 0.0 00:32:55.413 }, 00:32:55.413 { 00:32:55.413 "id": 14, 00:32:55.413 "state": "FREE", 00:32:55.413 "validity": 0.0 00:32:55.413 }, 00:32:55.413 { 00:32:55.413 "id": 15, 00:32:55.413 "state": "FREE", 00:32:55.413 "validity": 0.0 00:32:55.413 }, 00:32:55.413 { 00:32:55.413 "id": 16, 00:32:55.413 "state": "FREE", 00:32:55.413 "validity": 0.0 00:32:55.413 }, 00:32:55.413 { 00:32:55.413 "id": 17, 00:32:55.413 "state": "FREE", 00:32:55.413 "validity": 0.0 00:32:55.413 } 00:32:55.413 ], 00:32:55.413 "read-only": true 00:32:55.413 }, 00:32:55.413 { 00:32:55.413 "name": "cache_device", 00:32:55.413 "type": "bdev", 00:32:55.413 "chunks": [ 00:32:55.413 { 00:32:55.413 "id": 0, 00:32:55.413 "state": "INACTIVE", 00:32:55.413 "utilization": 0.0 00:32:55.413 }, 00:32:55.413 { 00:32:55.413 "id": 1, 00:32:55.413 "state": "CLOSED", 00:32:55.413 "utilization": 1.0 00:32:55.413 }, 00:32:55.413 { 00:32:55.413 "id": 2, 00:32:55.413 "state": "CLOSED", 00:32:55.413 "utilization": 1.0 00:32:55.413 }, 00:32:55.413 { 00:32:55.413 "id": 3, 00:32:55.413 "state": "OPEN", 00:32:55.413 "utilization": 0.001953125 00:32:55.413 }, 00:32:55.413 { 00:32:55.413 "id": 4, 00:32:55.413 "state": "OPEN", 00:32:55.413 "utilization": 0.0 00:32:55.413 } 00:32:55.413 ], 00:32:55.413 "read-only": true 00:32:55.413 
}, 00:32:55.413 { 00:32:55.413 "name": "verbose_mode", 00:32:55.413 "value": true, 00:32:55.413 "unit": "", 00:32:55.413 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:32:55.413 }, 00:32:55.413 { 00:32:55.413 "name": "prep_upgrade_on_shutdown", 00:32:55.413 "value": true, 00:32:55.413 "unit": "", 00:32:55.413 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:32:55.413 } 00:32:55.413 ] 00:32:55.413 } 00:32:55.413 18:34:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:32:55.413 18:34:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83935 ]] 00:32:55.413 18:34:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83935 00:32:55.413 18:34:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83935 ']' 00:32:55.413 18:34:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83935 00:32:55.413 18:34:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:55.413 18:34:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:55.413 18:34:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83935 00:32:55.413 killing process with pid 83935 00:32:55.413 18:34:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:55.413 18:34:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:55.413 18:34:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83935' 00:32:55.413 18:34:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83935 00:32:55.413 18:34:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83935 00:32:56.351 [2024-11-26 18:34:30.806692] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:56.612 [2024-11-26 18:34:30.822033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:56.612 [2024-11-26 18:34:30.822094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:56.612 [2024-11-26 18:34:30.822113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:56.612 [2024-11-26 18:34:30.822124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:56.612 [2024-11-26 18:34:30.822154] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:56.612 [2024-11-26 18:34:30.826001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:56.612 [2024-11-26 18:34:30.826048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:56.612 [2024-11-26 18:34:30.826061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.828 ms 00:32:56.612 [2024-11-26 18:34:30.826078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.741 [2024-11-26 18:34:38.588955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.741 [2024-11-26 18:34:38.589043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:33:04.741 [2024-11-26 18:34:38.589087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7762.889 ms 00:33:04.741 [2024-11-26 18:34:38.589099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.741 [2024-11-26 
18:34:38.590262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.741 [2024-11-26 18:34:38.590298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:33:04.741 [2024-11-26 18:34:38.590313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.141 ms 00:33:04.741 [2024-11-26 18:34:38.590324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.741 [2024-11-26 18:34:38.591464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.741 [2024-11-26 18:34:38.591493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:33:04.741 [2024-11-26 18:34:38.591525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.101 ms 00:33:04.741 [2024-11-26 18:34:38.591549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.742 [2024-11-26 18:34:38.602471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.742 [2024-11-26 18:34:38.602507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:33:04.742 [2024-11-26 18:34:38.602538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.849 ms 00:33:04.742 [2024-11-26 18:34:38.602549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.742 [2024-11-26 18:34:38.609704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.742 [2024-11-26 18:34:38.609750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:33:04.742 [2024-11-26 18:34:38.609781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.101 ms 00:33:04.742 [2024-11-26 18:34:38.609792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.742 [2024-11-26 18:34:38.609882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.742 [2024-11-26 18:34:38.609907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:33:04.742 [2024-11-26 18:34:38.609930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:33:04.742 [2024-11-26 18:34:38.609940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.742 [2024-11-26 18:34:38.620364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.742 [2024-11-26 18:34:38.620402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:33:04.742 [2024-11-26 18:34:38.620448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.403 ms 00:33:04.742 [2024-11-26 18:34:38.620458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.742 [2024-11-26 18:34:38.631063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.742 [2024-11-26 18:34:38.631128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:33:04.742 [2024-11-26 18:34:38.631163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.566 ms 00:33:04.742 [2024-11-26 18:34:38.631179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.742 [2024-11-26 18:34:38.641431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.742 [2024-11-26 18:34:38.641483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:33:04.742 [2024-11-26 18:34:38.641496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.215 ms 00:33:04.742 [2024-11-26 18:34:38.641506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:33:04.742 [2024-11-26 18:34:38.651759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.742 [2024-11-26 18:34:38.651805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:33:04.742 [2024-11-26 18:34:38.651833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.172 ms 00:33:04.742 [2024-11-26 18:34:38.651842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.742 [2024-11-26 18:34:38.651879] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:33:04.742 [2024-11-26 18:34:38.651913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:04.742 [2024-11-26 18:34:38.651926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:33:04.742 [2024-11-26 18:34:38.651937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:33:04.742 [2024-11-26 18:34:38.651948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:04.742 [2024-11-26 18:34:38.651958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:04.742 [2024-11-26 18:34:38.651968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:04.742 [2024-11-26 18:34:38.651977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:04.742 [2024-11-26 18:34:38.651987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:04.742 [2024-11-26 18:34:38.651997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:04.742 [2024-11-26 18:34:38.652006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:04.742 [2024-11-26 18:34:38.652024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:04.742 [2024-11-26 18:34:38.652033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:04.742 [2024-11-26 18:34:38.652043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:04.742 [2024-11-26 18:34:38.652053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:04.742 [2024-11-26 18:34:38.652062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:04.742 [2024-11-26 18:34:38.652071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:04.742 [2024-11-26 18:34:38.652096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:04.742 [2024-11-26 18:34:38.652106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:04.742 [2024-11-26 18:34:38.652119] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:33:04.742 [2024-11-26 18:34:38.652129] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: cddebbe7-5cfa-4147-bb3d-f8964dffa03a 00:33:04.742 [2024-11-26 18:34:38.652140] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:33:04.742 [2024-11-26 
18:34:38.652149] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:33:04.742 [2024-11-26 18:34:38.652158] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:33:04.742 [2024-11-26 18:34:38.652168] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:33:04.742 [2024-11-26 18:34:38.652183] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:33:04.742 [2024-11-26 18:34:38.652203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:33:04.742 [2024-11-26 18:34:38.652217] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:33:04.742 [2024-11-26 18:34:38.652226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:33:04.742 [2024-11-26 18:34:38.652234] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:33:04.742 [2024-11-26 18:34:38.652259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.742 [2024-11-26 18:34:38.652269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:33:04.742 [2024-11-26 18:34:38.652281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.381 ms 00:33:04.742 [2024-11-26 18:34:38.652292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.742 [2024-11-26 18:34:38.667195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.742 [2024-11-26 18:34:38.667245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:33:04.742 [2024-11-26 18:34:38.667292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.880 ms 00:33:04.742 [2024-11-26 18:34:38.667303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.742 [2024-11-26 18:34:38.667791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:04.742 [2024-11-26 18:34:38.667834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:33:04.742 [2024-11-26 18:34:38.667847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.451 ms 00:33:04.742 [2024-11-26 18:34:38.667857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.742 [2024-11-26 18:34:38.715479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.742 [2024-11-26 18:34:38.715536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:04.742 [2024-11-26 18:34:38.715575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.742 [2024-11-26 18:34:38.715588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.742 [2024-11-26 18:34:38.715628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.742 [2024-11-26 18:34:38.715642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:04.742 [2024-11-26 18:34:38.715653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.742 [2024-11-26 18:34:38.715663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.742 [2024-11-26 18:34:38.715759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.742 [2024-11-26 18:34:38.715782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:04.742 [2024-11-26 18:34:38.715831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.742 [2024-11-26 18:34:38.715842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:33:04.742 [2024-11-26 18:34:38.715865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.742 [2024-11-26 18:34:38.715878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:04.742 [2024-11-26 18:34:38.715890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.742 [2024-11-26 18:34:38.715900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.742 [2024-11-26 18:34:38.803150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.742 [2024-11-26 18:34:38.803233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:04.742 [2024-11-26 18:34:38.803275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.742 [2024-11-26 18:34:38.803285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.742 [2024-11-26 18:34:38.873131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.743 [2024-11-26 18:34:38.873196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:04.743 [2024-11-26 18:34:38.873213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.743 [2024-11-26 18:34:38.873224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.743 [2024-11-26 18:34:38.873362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.743 [2024-11-26 18:34:38.873380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:04.743 [2024-11-26 18:34:38.873392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.743 [2024-11-26 18:34:38.873424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.743 [2024-11-26 18:34:38.873517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.743 [2024-11-26 18:34:38.873534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:04.743 [2024-11-26 18:34:38.873546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.743 [2024-11-26 18:34:38.873557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.743 [2024-11-26 18:34:38.873706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.743 [2024-11-26 18:34:38.873729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:04.743 [2024-11-26 18:34:38.873741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.743 [2024-11-26 18:34:38.873752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.743 [2024-11-26 18:34:38.873813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.743 [2024-11-26 18:34:38.873830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:33:04.743 [2024-11-26 18:34:38.873857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.743 [2024-11-26 18:34:38.873867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.743 [2024-11-26 18:34:38.873915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.743 [2024-11-26 18:34:38.873931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:04.743 [2024-11-26 18:34:38.873957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.743 [2024-11-26 18:34:38.873968] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.743 [2024-11-26 18:34:38.874026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:04.743 [2024-11-26 18:34:38.874050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:04.743 [2024-11-26 18:34:38.874063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:04.743 [2024-11-26 18:34:38.874073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:04.743 [2024-11-26 18:34:38.874278] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8052.200 ms, result 0 00:33:08.070 18:34:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:33:08.070 18:34:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:33:08.070 18:34:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:08.070 18:34:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:08.070 18:34:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:08.070 18:34:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84504 00:33:08.070 18:34:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:08.070 18:34:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84504 00:33:08.070 18:34:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84504 ']' 00:33:08.070 18:34:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:08.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:08.070 18:34:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:08.070 18:34:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:08.070 18:34:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:08.070 18:34:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:08.070 18:34:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:08.329 [2024-11-26 18:34:42.560816] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
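After the 8052.200 ms 'FTL shutdown' above, the target comes back from saved state alone: spdk_tgt is relaunched with --config pointing at the tgt.json captured by save_config before the fill phase, so the TCP transport, subsystem, and FTL bdev are recreated without re-issuing RPCs, and the FTL device can then be reopened on top of the metadata persisted during shutdown. The restart pattern being traced (paths abbreviated):

  # Relaunch the target from the saved config and wait for its RPC socket
  # (waitforlisten is the autotest_common.sh helper used above).
  build/bin/spdk_tgt '--cpumask=[0]' --config=test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"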
00:33:08.329 [2024-11-26 18:34:42.561008] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84504 ] 00:33:08.329 [2024-11-26 18:34:42.730023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.588 [2024-11-26 18:34:42.832986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.526 [2024-11-26 18:34:43.663264] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:09.526 [2024-11-26 18:34:43.663349] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:09.526 [2024-11-26 18:34:43.809647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.526 [2024-11-26 18:34:43.809689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:09.526 [2024-11-26 18:34:43.809708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:09.526 [2024-11-26 18:34:43.809719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.526 [2024-11-26 18:34:43.809793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.526 [2024-11-26 18:34:43.809812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:09.526 [2024-11-26 18:34:43.809824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:33:09.526 [2024-11-26 18:34:43.809834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.526 [2024-11-26 18:34:43.809863] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:09.526 [2024-11-26 18:34:43.810630] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:09.526 [2024-11-26 18:34:43.810674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.526 [2024-11-26 18:34:43.810687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:09.526 [2024-11-26 18:34:43.810699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.817 ms 00:33:09.526 [2024-11-26 18:34:43.810709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.526 [2024-11-26 18:34:43.812642] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:09.526 [2024-11-26 18:34:43.826174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.526 [2024-11-26 18:34:43.826230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:09.526 [2024-11-26 18:34:43.826247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.534 ms 00:33:09.526 [2024-11-26 18:34:43.826257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.526 [2024-11-26 18:34:43.826321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.526 [2024-11-26 18:34:43.826338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:09.526 [2024-11-26 18:34:43.826349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:33:09.526 [2024-11-26 18:34:43.826359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.526 [2024-11-26 18:34:43.835008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.526 [2024-11-26 
18:34:43.835049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:09.526 [2024-11-26 18:34:43.835064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.579 ms 00:33:09.526 [2024-11-26 18:34:43.835074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.527 [2024-11-26 18:34:43.835163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.527 [2024-11-26 18:34:43.835188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:09.527 [2024-11-26 18:34:43.835199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:33:09.527 [2024-11-26 18:34:43.835208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.527 [2024-11-26 18:34:43.835296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.527 [2024-11-26 18:34:43.835317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:09.527 [2024-11-26 18:34:43.835329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:33:09.527 [2024-11-26 18:34:43.835348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.527 [2024-11-26 18:34:43.835396] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:09.527 [2024-11-26 18:34:43.839658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.527 [2024-11-26 18:34:43.839693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:09.527 [2024-11-26 18:34:43.839728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.270 ms 00:33:09.527 [2024-11-26 18:34:43.839738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.527 [2024-11-26 18:34:43.839775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.527 [2024-11-26 18:34:43.839793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:09.527 [2024-11-26 18:34:43.839805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:09.527 [2024-11-26 18:34:43.839814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.527 [2024-11-26 18:34:43.839859] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:09.527 [2024-11-26 18:34:43.839894] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:09.527 [2024-11-26 18:34:43.839932] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:09.527 [2024-11-26 18:34:43.839950] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:09.527 [2024-11-26 18:34:43.840097] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:09.527 [2024-11-26 18:34:43.840120] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:09.527 [2024-11-26 18:34:43.840133] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:09.527 [2024-11-26 18:34:43.840147] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:09.527 [2024-11-26 18:34:43.840166] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:33:09.527 [2024-11-26 18:34:43.840178] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:09.527 [2024-11-26 18:34:43.840188] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:09.527 [2024-11-26 18:34:43.840199] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:09.527 [2024-11-26 18:34:43.840209] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:09.527 [2024-11-26 18:34:43.840221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.527 [2024-11-26 18:34:43.840230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:09.527 [2024-11-26 18:34:43.840241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.364 ms 00:33:09.527 [2024-11-26 18:34:43.840251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.527 [2024-11-26 18:34:43.840334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.527 [2024-11-26 18:34:43.840348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:09.527 [2024-11-26 18:34:43.840363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:33:09.527 [2024-11-26 18:34:43.840384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.527 [2024-11-26 18:34:43.840486] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:09.527 [2024-11-26 18:34:43.840503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:09.527 [2024-11-26 18:34:43.840515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:09.527 [2024-11-26 18:34:43.840526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:09.527 [2024-11-26 18:34:43.840536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:09.527 [2024-11-26 18:34:43.840546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:09.527 [2024-11-26 18:34:43.840575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:09.527 [2024-11-26 18:34:43.840588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:09.527 [2024-11-26 18:34:43.840598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:09.527 [2024-11-26 18:34:43.840608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:09.527 [2024-11-26 18:34:43.840617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:09.527 [2024-11-26 18:34:43.840627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:09.527 [2024-11-26 18:34:43.840636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:09.527 [2024-11-26 18:34:43.840646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:09.527 [2024-11-26 18:34:43.840657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:33:09.527 [2024-11-26 18:34:43.840667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:09.527 [2024-11-26 18:34:43.840676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:09.527 [2024-11-26 18:34:43.840685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:09.527 [2024-11-26 18:34:43.840694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:09.527 [2024-11-26 18:34:43.840704] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:09.527 [2024-11-26 18:34:43.840714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:09.527 [2024-11-26 18:34:43.840723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:09.527 [2024-11-26 18:34:43.840733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:09.527 [2024-11-26 18:34:43.840755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:09.527 [2024-11-26 18:34:43.840765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:09.527 [2024-11-26 18:34:43.840775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:09.527 [2024-11-26 18:34:43.840785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:09.527 [2024-11-26 18:34:43.840794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:09.527 [2024-11-26 18:34:43.840803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:09.527 [2024-11-26 18:34:43.840813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:09.527 [2024-11-26 18:34:43.840822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:09.527 [2024-11-26 18:34:43.840831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:09.527 [2024-11-26 18:34:43.840840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:09.527 [2024-11-26 18:34:43.840850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:09.527 [2024-11-26 18:34:43.840859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:09.527 [2024-11-26 18:34:43.840869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:09.527 [2024-11-26 18:34:43.840878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:09.527 [2024-11-26 18:34:43.840887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:09.527 [2024-11-26 18:34:43.840896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:09.527 [2024-11-26 18:34:43.840906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:09.527 [2024-11-26 18:34:43.840915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:09.527 [2024-11-26 18:34:43.840924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:09.527 [2024-11-26 18:34:43.840933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:09.527 [2024-11-26 18:34:43.840943] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:09.527 [2024-11-26 18:34:43.840953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:09.527 [2024-11-26 18:34:43.840964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:09.527 [2024-11-26 18:34:43.840984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:09.527 [2024-11-26 18:34:43.840996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:09.527 [2024-11-26 18:34:43.841006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:09.527 [2024-11-26 18:34:43.841015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:09.527 [2024-11-26 18:34:43.841025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:09.527 [2024-11-26 18:34:43.841035] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:09.527 [2024-11-26 18:34:43.841044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:09.527 [2024-11-26 18:34:43.841055] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:09.527 [2024-11-26 18:34:43.841068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:09.527 [2024-11-26 18:34:43.841079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:09.527 [2024-11-26 18:34:43.841089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:09.527 [2024-11-26 18:34:43.841100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:09.527 [2024-11-26 18:34:43.841109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:09.527 [2024-11-26 18:34:43.841121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:09.527 [2024-11-26 18:34:43.841131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:09.527 [2024-11-26 18:34:43.841141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:09.527 [2024-11-26 18:34:43.841151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:09.527 [2024-11-26 18:34:43.841162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:09.528 [2024-11-26 18:34:43.841172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:09.528 [2024-11-26 18:34:43.841182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:09.528 [2024-11-26 18:34:43.841192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:09.528 [2024-11-26 18:34:43.841201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:09.528 [2024-11-26 18:34:43.841211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:09.528 [2024-11-26 18:34:43.841220] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:33:09.528 [2024-11-26 18:34:43.841232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:09.528 [2024-11-26 18:34:43.841243] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:09.528 [2024-11-26 18:34:43.841253] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:09.528 [2024-11-26 18:34:43.841264] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:09.528 [2024-11-26 18:34:43.841273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:09.528 [2024-11-26 18:34:43.841283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:09.528 [2024-11-26 18:34:43.841294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:09.528 [2024-11-26 18:34:43.841305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.855 ms 00:33:09.528 [2024-11-26 18:34:43.841322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:09.528 [2024-11-26 18:34:43.841380] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:33:09.528 [2024-11-26 18:34:43.841402] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:33:12.818 [2024-11-26 18:34:47.240964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:12.818 [2024-11-26 18:34:47.241037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:33:12.818 [2024-11-26 18:34:47.241074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3399.602 ms 00:33:12.818 [2024-11-26 18:34:47.241085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:12.818 [2024-11-26 18:34:47.274716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:12.818 [2024-11-26 18:34:47.274774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:12.818 [2024-11-26 18:34:47.274827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.249 ms 00:33:12.818 [2024-11-26 18:34:47.274839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:12.818 [2024-11-26 18:34:47.274969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:12.818 [2024-11-26 18:34:47.274987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:12.818 [2024-11-26 18:34:47.275001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:33:12.818 [2024-11-26 18:34:47.275011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.078 [2024-11-26 18:34:47.312549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.078 [2024-11-26 18:34:47.312611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:13.078 [2024-11-26 18:34:47.312650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.463 ms 00:33:13.078 [2024-11-26 18:34:47.312661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.078 [2024-11-26 18:34:47.312721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.078 [2024-11-26 18:34:47.312736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:13.078 [2024-11-26 18:34:47.312749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:13.078 [2024-11-26 18:34:47.312760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.078 [2024-11-26 18:34:47.313395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.078 [2024-11-26 18:34:47.313422] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:13.078 [2024-11-26 18:34:47.313435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.540 ms 00:33:13.078 [2024-11-26 18:34:47.313452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.078 [2024-11-26 18:34:47.313510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.078 [2024-11-26 18:34:47.313525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:13.078 [2024-11-26 18:34:47.313537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:33:13.078 [2024-11-26 18:34:47.313547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.078 [2024-11-26 18:34:47.332339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.078 [2024-11-26 18:34:47.332386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:13.078 [2024-11-26 18:34:47.332417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.739 ms 00:33:13.078 [2024-11-26 18:34:47.332428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.078 [2024-11-26 18:34:47.355719] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:33:13.078 [2024-11-26 18:34:47.355762] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:13.078 [2024-11-26 18:34:47.355796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.078 [2024-11-26 18:34:47.355807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:33:13.078 [2024-11-26 18:34:47.355820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.213 ms 00:33:13.078 [2024-11-26 18:34:47.355830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.078 [2024-11-26 18:34:47.369946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.078 [2024-11-26 18:34:47.370005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:33:13.078 [2024-11-26 18:34:47.370037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.068 ms 00:33:13.078 [2024-11-26 18:34:47.370047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.079 [2024-11-26 18:34:47.382085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.079 [2024-11-26 18:34:47.382123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:33:13.079 [2024-11-26 18:34:47.382153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.004 ms 00:33:13.079 [2024-11-26 18:34:47.382163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.079 [2024-11-26 18:34:47.394364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.079 [2024-11-26 18:34:47.394401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:33:13.079 [2024-11-26 18:34:47.394432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.158 ms 00:33:13.079 [2024-11-26 18:34:47.394443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.079 [2024-11-26 18:34:47.395194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.079 [2024-11-26 18:34:47.395235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:13.079 [2024-11-26 
18:34:47.395251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.622 ms 00:33:13.079 [2024-11-26 18:34:47.395262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.079 [2024-11-26 18:34:47.461592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.079 [2024-11-26 18:34:47.461664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:13.079 [2024-11-26 18:34:47.461684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 66.271 ms 00:33:13.079 [2024-11-26 18:34:47.461695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.079 [2024-11-26 18:34:47.471442] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:13.079 [2024-11-26 18:34:47.472171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.079 [2024-11-26 18:34:47.472203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:13.079 [2024-11-26 18:34:47.472234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.410 ms 00:33:13.079 [2024-11-26 18:34:47.472244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.079 [2024-11-26 18:34:47.472336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.079 [2024-11-26 18:34:47.472387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:33:13.079 [2024-11-26 18:34:47.472400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:33:13.079 [2024-11-26 18:34:47.472410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.079 [2024-11-26 18:34:47.472516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.079 [2024-11-26 18:34:47.472540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:13.079 [2024-11-26 18:34:47.472569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:33:13.079 [2024-11-26 18:34:47.472584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.079 [2024-11-26 18:34:47.472621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.079 [2024-11-26 18:34:47.472636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:13.079 [2024-11-26 18:34:47.472654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:13.079 [2024-11-26 18:34:47.472664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.079 [2024-11-26 18:34:47.472710] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:13.079 [2024-11-26 18:34:47.472727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.079 [2024-11-26 18:34:47.472739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:13.079 [2024-11-26 18:34:47.472750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:33:13.079 [2024-11-26 18:34:47.472761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.079 [2024-11-26 18:34:47.497605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.079 [2024-11-26 18:34:47.497649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:33:13.079 [2024-11-26 18:34:47.497665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.817 ms 00:33:13.079 [2024-11-26 18:34:47.497676] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.079 [2024-11-26 18:34:47.497759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.079 [2024-11-26 18:34:47.497776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:13.079 [2024-11-26 18:34:47.497788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:33:13.079 [2024-11-26 18:34:47.497798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.079 [2024-11-26 18:34:47.499414] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3689.147 ms, result 0 00:33:13.079 [2024-11-26 18:34:47.514011] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:13.079 [2024-11-26 18:34:47.530053] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:13.338 [2024-11-26 18:34:47.538212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:13.338 18:34:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:13.338 18:34:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:13.338 18:34:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:13.338 18:34:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:13.338 18:34:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:33:13.598 [2024-11-26 18:34:47.834275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.598 [2024-11-26 18:34:47.834327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:33:13.598 [2024-11-26 18:34:47.834368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:33:13.598 [2024-11-26 18:34:47.834379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.598 [2024-11-26 18:34:47.834410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.598 [2024-11-26 18:34:47.834425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:33:13.598 [2024-11-26 18:34:47.834438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:13.598 [2024-11-26 18:34:47.834448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.598 [2024-11-26 18:34:47.834472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:13.598 [2024-11-26 18:34:47.834486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:33:13.598 [2024-11-26 18:34:47.834497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:33:13.598 [2024-11-26 18:34:47.834507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:13.598 [2024-11-26 18:34:47.834590] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.291 ms, result 0 00:33:13.598 true 00:33:13.598 18:34:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:13.858 { 00:33:13.858 "name": "ftl", 00:33:13.858 "properties": [ 00:33:13.858 { 00:33:13.858 "name": "superblock_version", 00:33:13.858 "value": 5, 00:33:13.858 "read-only": true 00:33:13.858 }, 
00:33:13.858 { 00:33:13.858 "name": "base_device", 00:33:13.858 "bands": [ 00:33:13.858 { 00:33:13.858 "id": 0, 00:33:13.858 "state": "CLOSED", 00:33:13.858 "validity": 1.0 00:33:13.858 }, 00:33:13.858 { 00:33:13.858 "id": 1, 00:33:13.858 "state": "CLOSED", 00:33:13.858 "validity": 1.0 00:33:13.858 }, 00:33:13.858 { 00:33:13.858 "id": 2, 00:33:13.858 "state": "CLOSED", 00:33:13.858 "validity": 0.007843137254901933 00:33:13.858 }, 00:33:13.858 { 00:33:13.858 "id": 3, 00:33:13.858 "state": "FREE", 00:33:13.858 "validity": 0.0 00:33:13.858 }, 00:33:13.858 { 00:33:13.858 "id": 4, 00:33:13.858 "state": "FREE", 00:33:13.858 "validity": 0.0 00:33:13.858 }, 00:33:13.858 { 00:33:13.858 "id": 5, 00:33:13.858 "state": "FREE", 00:33:13.858 "validity": 0.0 00:33:13.858 }, 00:33:13.858 { 00:33:13.858 "id": 6, 00:33:13.858 "state": "FREE", 00:33:13.858 "validity": 0.0 00:33:13.858 }, 00:33:13.858 { 00:33:13.858 "id": 7, 00:33:13.858 "state": "FREE", 00:33:13.858 "validity": 0.0 00:33:13.858 }, 00:33:13.858 { 00:33:13.858 "id": 8, 00:33:13.858 "state": "FREE", 00:33:13.858 "validity": 0.0 00:33:13.858 }, 00:33:13.858 { 00:33:13.858 "id": 9, 00:33:13.858 "state": "FREE", 00:33:13.858 "validity": 0.0 00:33:13.858 }, 00:33:13.858 { 00:33:13.858 "id": 10, 00:33:13.858 "state": "FREE", 00:33:13.858 "validity": 0.0 00:33:13.858 }, 00:33:13.858 { 00:33:13.858 "id": 11, 00:33:13.858 "state": "FREE", 00:33:13.858 "validity": 0.0 00:33:13.858 }, 00:33:13.858 { 00:33:13.858 "id": 12, 00:33:13.858 "state": "FREE", 00:33:13.858 "validity": 0.0 00:33:13.858 }, 00:33:13.858 { 00:33:13.858 "id": 13, 00:33:13.858 "state": "FREE", 00:33:13.858 "validity": 0.0 00:33:13.858 }, 00:33:13.858 { 00:33:13.858 "id": 14, 00:33:13.858 "state": "FREE", 00:33:13.858 "validity": 0.0 00:33:13.858 }, 00:33:13.858 { 00:33:13.858 "id": 15, 00:33:13.858 "state": "FREE", 00:33:13.858 "validity": 0.0 00:33:13.858 }, 00:33:13.858 { 00:33:13.858 "id": 16, 00:33:13.858 "state": "FREE", 00:33:13.858 "validity": 0.0 00:33:13.858 }, 00:33:13.858 { 00:33:13.858 "id": 17, 00:33:13.858 "state": "FREE", 00:33:13.858 "validity": 0.0 00:33:13.858 } 00:33:13.858 ], 00:33:13.858 "read-only": true 00:33:13.858 }, 00:33:13.858 { 00:33:13.858 "name": "cache_device", 00:33:13.858 "type": "bdev", 00:33:13.858 "chunks": [ 00:33:13.858 { 00:33:13.858 "id": 0, 00:33:13.858 "state": "INACTIVE", 00:33:13.858 "utilization": 0.0 00:33:13.858 }, 00:33:13.858 { 00:33:13.858 "id": 1, 00:33:13.858 "state": "OPEN", 00:33:13.858 "utilization": 0.0 00:33:13.858 }, 00:33:13.858 { 00:33:13.858 "id": 2, 00:33:13.858 "state": "OPEN", 00:33:13.858 "utilization": 0.0 00:33:13.858 }, 00:33:13.858 { 00:33:13.858 "id": 3, 00:33:13.858 "state": "FREE", 00:33:13.858 "utilization": 0.0 00:33:13.858 }, 00:33:13.858 { 00:33:13.858 "id": 4, 00:33:13.858 "state": "FREE", 00:33:13.859 "utilization": 0.0 00:33:13.859 } 00:33:13.859 ], 00:33:13.859 "read-only": true 00:33:13.859 }, 00:33:13.859 { 00:33:13.859 "name": "verbose_mode", 00:33:13.859 "value": true, 00:33:13.859 "unit": "", 00:33:13.859 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:33:13.859 }, 00:33:13.859 { 00:33:13.859 "name": "prep_upgrade_on_shutdown", 00:33:13.859 "value": false, 00:33:13.859 "unit": "", 00:33:13.859 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:33:13.859 } 00:33:13.859 ] 00:33:13.859 } 00:33:13.859 18:34:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:33:13.859 18:34:48 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:13.859 18:34:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:33:14.118 18:34:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:33:14.118 18:34:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:33:14.118 18:34:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:33:14.119 18:34:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:33:14.119 18:34:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:33:14.378 18:34:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:33:14.378 18:34:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:33:14.378 18:34:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:33:14.378 18:34:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:14.378 18:34:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:14.378 18:34:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:14.378 Validate MD5 checksum, iteration 1 00:33:14.378 18:34:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:14.378 18:34:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:14.378 18:34:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:14.378 18:34:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:14.378 18:34:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:14.378 18:34:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:14.378 18:34:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:14.378 [2024-11-26 18:34:48.723677] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
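Before the read-back starts, the two jq filters above reduce the bdev_ftl_get_properties dump to single numbers — NV-cache chunks with non-zero utilization and bands in OPENED state — and the test insists both are 0. A sketch of that gate under the same rpc.py defaults as this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    props=$("$rpc" bdev_ftl_get_properties -b ftl)
    # chunks of the write-buffer cache that still hold data
    used=$(jq '[.properties[] | select(.name == "cache_device")
                | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
    # bands mid-write; note the dump above names the band list "base_device",
    # so this filter, reproduced as traced, can only ever return 0
    opened=$(jq '[.properties[] | select(.name == "bands")
                 | .bands[] | select(.state == "OPENED")] | length' <<< "$props")
    (( used == 0 && opened == 0 ))
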
00:33:14.378 [2024-11-26 18:34:48.723860] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84591 ] 00:33:14.637 [2024-11-26 18:34:48.915768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.637 [2024-11-26 18:34:49.079292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:16.540  [2024-11-26T18:34:51.937Z] Copying: 489/1024 [MB] (489 MBps) [2024-11-26T18:34:51.937Z] Copying: 973/1024 [MB] (484 MBps) [2024-11-26T18:34:53.313Z] Copying: 1024/1024 [MB] (average 485 MBps) 00:33:18.852 00:33:18.852 18:34:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:18.852 18:34:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:20.753 18:34:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:20.753 Validate MD5 checksum, iteration 2 00:33:20.753 18:34:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a120c35d1af93e712eb91256f9eef129 00:33:20.753 18:34:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a120c35d1af93e712eb91256f9eef129 != \a\1\2\0\c\3\5\d\1\a\f\9\3\e\7\1\2\e\b\9\1\2\5\6\f\9\e\e\f\1\2\9 ]] 00:33:20.753 18:34:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:20.753 18:34:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:20.753 18:34:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:20.753 18:34:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:20.753 18:34:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:20.753 18:34:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:20.753 18:34:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:20.753 18:34:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:20.753 18:34:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:20.753 [2024-11-26 18:34:55.085681] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
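Iteration 1 above read the first 1 GiB window of ftln1 over NVMe/TCP at 1 MiB block size and queue depth 2, fingerprinted it, and matched the sum recorded when the test pattern was originally written; iteration 2 repeats this for the next window. One pass, condensed (tmp_file and expected_sum are illustrative names; the real loop lives in ftl/upgrade_shutdown.sh):

    # read 1024 x 1 MiB blocks at qd 2, starting at the current window
    tcp_dd --ib=ftln1 --of="$tmp_file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    skip=$((skip + 1024))
    sum=$(md5sum "$tmp_file" | cut -f1 -d' ')
    # the escaped [[ ... != ... ]] in the trace is this comparison post-expansion
    [[ $sum == "$expected_sum" ]]
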
00:33:20.753 [2024-11-26 18:34:55.085867] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84654 ] 00:33:21.012 [2024-11-26 18:34:55.266005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.012 [2024-11-26 18:34:55.423132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:22.916  [2024-11-26T18:34:58.313Z] Copying: 470/1024 [MB] (470 MBps) [2024-11-26T18:34:58.313Z] Copying: 919/1024 [MB] (449 MBps) [2024-11-26T18:35:00.845Z] Copying: 1024/1024 [MB] (average 454 MBps) 00:33:26.384 00:33:26.384 18:35:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:26.384 18:35:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:27.760 18:35:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:27.760 18:35:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b9672791b0e9cd2ec74b021a236eb3d9 00:33:27.760 18:35:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b9672791b0e9cd2ec74b021a236eb3d9 != \b\9\6\7\2\7\9\1\b\0\e\9\c\d\2\e\c\7\4\b\0\2\1\a\2\3\6\e\b\3\d\9 ]] 00:33:27.760 18:35:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:27.760 18:35:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:27.760 18:35:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:33:27.760 18:35:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84504 ]] 00:33:27.760 18:35:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84504 00:33:27.760 18:35:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:33:27.760 18:35:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:33:27.760 18:35:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:33:27.760 18:35:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:33:27.760 18:35:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:27.760 18:35:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84733 00:33:27.760 18:35:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:33:27.760 18:35:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84733 00:33:27.760 18:35:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84733 ']' 00:33:27.760 18:35:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:27.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:27.760 18:35:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:27.760 18:35:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
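Both windows still carry the expected sums, so tcp_target_shutdown_dirty takes the target down the hard way: SIGKILL leaves FTL no chance to run its 'FTL shutdown' management path, and the next tcp_target_setup (pid 84733 above) must start from whatever last landed on media. The sequence, as traced in ftl/common.sh:

    # common.sh@137-139: dirty shutdown is an unconditional SIGKILL
    [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid
    # restart on the same tgt.json; startup must now take the recovery path
    tcp_target_setup
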
00:33:27.760 18:35:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:27.760 18:35:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:27.760 18:35:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:28.018 [2024-11-26 18:35:02.307470] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:33:28.018 [2024-11-26 18:35:02.307674] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84733 ] 00:33:28.018 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84504 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:33:28.276 [2024-11-26 18:35:02.486301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.276 [2024-11-26 18:35:02.588098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:29.211 [2024-11-26 18:35:03.412984] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:29.212 [2024-11-26 18:35:03.413086] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:33:29.212 [2024-11-26 18:35:03.559165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.212 [2024-11-26 18:35:03.559208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:33:29.212 [2024-11-26 18:35:03.559243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:29.212 [2024-11-26 18:35:03.559254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.212 [2024-11-26 18:35:03.559333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.212 [2024-11-26 18:35:03.559353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:29.212 [2024-11-26 18:35:03.559365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:33:29.212 [2024-11-26 18:35:03.559374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.212 [2024-11-26 18:35:03.559405] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:33:29.212 [2024-11-26 18:35:03.560362] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:33:29.212 [2024-11-26 18:35:03.560419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.212 [2024-11-26 18:35:03.560432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:29.212 [2024-11-26 18:35:03.560444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.020 ms 00:33:29.212 [2024-11-26 18:35:03.560454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.212 [2024-11-26 18:35:03.561057] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:33:29.212 [2024-11-26 18:35:03.579815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.212 [2024-11-26 18:35:03.579853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:33:29.212 [2024-11-26 18:35:03.579885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.774 ms 
00:33:29.212 [2024-11-26 18:35:03.579896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.212 [2024-11-26 18:35:03.589271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.212 [2024-11-26 18:35:03.589311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:33:29.212 [2024-11-26 18:35:03.589342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:33:29.212 [2024-11-26 18:35:03.589352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.212 [2024-11-26 18:35:03.589864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.212 [2024-11-26 18:35:03.589901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:29.212 [2024-11-26 18:35:03.589915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.417 ms 00:33:29.212 [2024-11-26 18:35:03.589954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.212 [2024-11-26 18:35:03.590028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.212 [2024-11-26 18:35:03.590047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:29.212 [2024-11-26 18:35:03.590074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:33:29.212 [2024-11-26 18:35:03.590085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.212 [2024-11-26 18:35:03.590120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.212 [2024-11-26 18:35:03.590135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:33:29.212 [2024-11-26 18:35:03.590147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:33:29.212 [2024-11-26 18:35:03.590157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.212 [2024-11-26 18:35:03.590191] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:33:29.212 [2024-11-26 18:35:03.593526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.212 [2024-11-26 18:35:03.593584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:29.212 [2024-11-26 18:35:03.593616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.342 ms 00:33:29.212 [2024-11-26 18:35:03.593633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.212 [2024-11-26 18:35:03.593664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.212 [2024-11-26 18:35:03.593679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:33:29.212 [2024-11-26 18:35:03.593690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:29.212 [2024-11-26 18:35:03.593701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.212 [2024-11-26 18:35:03.593745] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:33:29.212 [2024-11-26 18:35:03.593773] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:33:29.212 [2024-11-26 18:35:03.593842] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:33:29.212 [2024-11-26 18:35:03.593865] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:33:29.212 [2024-11-26 
18:35:03.593970] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:33:29.212 [2024-11-26 18:35:03.593984] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:33:29.212 [2024-11-26 18:35:03.593998] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:33:29.212 [2024-11-26 18:35:03.594012] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:33:29.212 [2024-11-26 18:35:03.594024] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:33:29.212 [2024-11-26 18:35:03.594035] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:33:29.212 [2024-11-26 18:35:03.594045] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:33:29.212 [2024-11-26 18:35:03.594056] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:33:29.212 [2024-11-26 18:35:03.594066] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:33:29.212 [2024-11-26 18:35:03.594082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.212 [2024-11-26 18:35:03.594093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:33:29.212 [2024-11-26 18:35:03.594104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.340 ms 00:33:29.212 [2024-11-26 18:35:03.594115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.212 [2024-11-26 18:35:03.594200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.212 [2024-11-26 18:35:03.594213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:33:29.212 [2024-11-26 18:35:03.594225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:33:29.212 [2024-11-26 18:35:03.594235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.212 [2024-11-26 18:35:03.594337] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:33:29.212 [2024-11-26 18:35:03.594368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:33:29.212 [2024-11-26 18:35:03.594383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:29.212 [2024-11-26 18:35:03.594394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:29.212 [2024-11-26 18:35:03.594406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:33:29.212 [2024-11-26 18:35:03.594416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:33:29.212 [2024-11-26 18:35:03.594425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:33:29.212 [2024-11-26 18:35:03.594435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:33:29.212 [2024-11-26 18:35:03.594444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:33:29.212 [2024-11-26 18:35:03.594453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:29.212 [2024-11-26 18:35:03.594463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:33:29.212 [2024-11-26 18:35:03.594472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:33:29.212 [2024-11-26 18:35:03.594482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:29.212 [2024-11-26 
18:35:03.594492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:33:29.212 [2024-11-26 18:35:03.594502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:33:29.212 [2024-11-26 18:35:03.594528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:29.212 [2024-11-26 18:35:03.594543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:33:29.212 [2024-11-26 18:35:03.594567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:33:29.213 [2024-11-26 18:35:03.594580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:29.213 [2024-11-26 18:35:03.594590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:33:29.213 [2024-11-26 18:35:03.594600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:33:29.213 [2024-11-26 18:35:03.594624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:29.213 [2024-11-26 18:35:03.594635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:33:29.213 [2024-11-26 18:35:03.594646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:33:29.213 [2024-11-26 18:35:03.594655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:29.213 [2024-11-26 18:35:03.594665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:33:29.213 [2024-11-26 18:35:03.594675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:33:29.213 [2024-11-26 18:35:03.594684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:29.213 [2024-11-26 18:35:03.594694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:33:29.213 [2024-11-26 18:35:03.594704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:33:29.213 [2024-11-26 18:35:03.594714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:33:29.213 [2024-11-26 18:35:03.594724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:33:29.213 [2024-11-26 18:35:03.594733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:33:29.213 [2024-11-26 18:35:03.594743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:29.213 [2024-11-26 18:35:03.594753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:33:29.213 [2024-11-26 18:35:03.594763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:33:29.213 [2024-11-26 18:35:03.594773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:29.213 [2024-11-26 18:35:03.594782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:33:29.213 [2024-11-26 18:35:03.594792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:33:29.213 [2024-11-26 18:35:03.594802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:29.213 [2024-11-26 18:35:03.594824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:33:29.213 [2024-11-26 18:35:03.594835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:33:29.213 [2024-11-26 18:35:03.594845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:29.213 [2024-11-26 18:35:03.594855] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:33:29.213 [2024-11-26 18:35:03.594866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:33:29.213 
[2024-11-26 18:35:03.594876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:33:29.213 [2024-11-26 18:35:03.594886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:33:29.213 [2024-11-26 18:35:03.594899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:33:29.213 [2024-11-26 18:35:03.594910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:33:29.213 [2024-11-26 18:35:03.594920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:33:29.213 [2024-11-26 18:35:03.594931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:33:29.213 [2024-11-26 18:35:03.594941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:33:29.213 [2024-11-26 18:35:03.594951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:33:29.213 [2024-11-26 18:35:03.594962] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:33:29.213 [2024-11-26 18:35:03.594976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:29.213 [2024-11-26 18:35:03.594989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:33:29.213 [2024-11-26 18:35:03.595000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:33:29.213 [2024-11-26 18:35:03.595010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:33:29.213 [2024-11-26 18:35:03.595021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:33:29.213 [2024-11-26 18:35:03.595031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:33:29.213 [2024-11-26 18:35:03.595041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:33:29.213 [2024-11-26 18:35:03.595052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:33:29.213 [2024-11-26 18:35:03.595063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:33:29.213 [2024-11-26 18:35:03.595073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:33:29.213 [2024-11-26 18:35:03.595083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:33:29.213 [2024-11-26 18:35:03.595093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:33:29.213 [2024-11-26 18:35:03.595103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:33:29.213 [2024-11-26 18:35:03.595113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:33:29.213 [2024-11-26 18:35:03.595124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:33:29.213 [2024-11-26 18:35:03.595134] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:33:29.213 [2024-11-26 18:35:03.595146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:29.213 [2024-11-26 18:35:03.595165] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:29.213 [2024-11-26 18:35:03.595176] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:33:29.213 [2024-11-26 18:35:03.595186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:33:29.213 [2024-11-26 18:35:03.595198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:33:29.213 [2024-11-26 18:35:03.595210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.213 [2024-11-26 18:35:03.595221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:33:29.213 [2024-11-26 18:35:03.595232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.935 ms 00:33:29.213 [2024-11-26 18:35:03.595243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.213 [2024-11-26 18:35:03.626659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.213 [2024-11-26 18:35:03.626710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:29.213 [2024-11-26 18:35:03.626744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.348 ms 00:33:29.213 [2024-11-26 18:35:03.626755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.213 [2024-11-26 18:35:03.626818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.213 [2024-11-26 18:35:03.626835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:33:29.213 [2024-11-26 18:35:03.626846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:33:29.213 [2024-11-26 18:35:03.626857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.213 [2024-11-26 18:35:03.665009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.213 [2024-11-26 18:35:03.665055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:29.213 [2024-11-26 18:35:03.665088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.077 ms 00:33:29.213 [2024-11-26 18:35:03.665099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.213 [2024-11-26 18:35:03.665152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.213 [2024-11-26 18:35:03.665172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:29.213 [2024-11-26 18:35:03.665184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:29.213 [2024-11-26 18:35:03.665201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.213 [2024-11-26 18:35:03.665412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.213 [2024-11-26 18:35:03.665432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:33:29.213 [2024-11-26 18:35:03.665445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.074 ms 00:33:29.213 [2024-11-26 18:35:03.665456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.213 [2024-11-26 18:35:03.665517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.213 [2024-11-26 18:35:03.665533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:29.213 [2024-11-26 18:35:03.665545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:33:29.213 [2024-11-26 18:35:03.665564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.473 [2024-11-26 18:35:03.685021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.473 [2024-11-26 18:35:03.685076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:29.473 [2024-11-26 18:35:03.685123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.426 ms 00:33:29.473 [2024-11-26 18:35:03.685140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.473 [2024-11-26 18:35:03.685303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.473 [2024-11-26 18:35:03.685357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:33:29.473 [2024-11-26 18:35:03.685371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:33:29.473 [2024-11-26 18:35:03.685382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.473 [2024-11-26 18:35:03.717396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.473 [2024-11-26 18:35:03.717454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:33:29.473 [2024-11-26 18:35:03.717502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.987 ms 00:33:29.473 [2024-11-26 18:35:03.717514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.473 [2024-11-26 18:35:03.727394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.473 [2024-11-26 18:35:03.727441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:33:29.473 [2024-11-26 18:35:03.727472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.522 ms 00:33:29.473 [2024-11-26 18:35:03.727483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.473 [2024-11-26 18:35:03.792400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.473 [2024-11-26 18:35:03.792484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:33:29.473 [2024-11-26 18:35:03.792521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 64.846 ms 00:33:29.473 [2024-11-26 18:35:03.792533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.473 [2024-11-26 18:35:03.792793] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:33:29.473 [2024-11-26 18:35:03.792957] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:33:29.473 [2024-11-26 18:35:03.793109] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:33:29.473 [2024-11-26 18:35:03.793232] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:33:29.473 [2024-11-26 18:35:03.793247] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.473 [2024-11-26 18:35:03.793260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:33:29.473 [2024-11-26 18:35:03.793274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.628 ms 00:33:29.473 [2024-11-26 18:35:03.793286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.473 [2024-11-26 18:35:03.793399] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:33:29.473 [2024-11-26 18:35:03.793431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.473 [2024-11-26 18:35:03.793449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:33:29.473 [2024-11-26 18:35:03.793462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:33:29.473 [2024-11-26 18:35:03.793473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.473 [2024-11-26 18:35:03.809271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.473 [2024-11-26 18:35:03.809317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:33:29.473 [2024-11-26 18:35:03.809350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.759 ms 00:33:29.473 [2024-11-26 18:35:03.809361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.473 [2024-11-26 18:35:03.818549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.473 [2024-11-26 18:35:03.818603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:33:29.473 [2024-11-26 18:35:03.818633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:33:29.473 [2024-11-26 18:35:03.818643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:29.473 [2024-11-26 18:35:03.818759] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:33:29.473 [2024-11-26 18:35:03.819068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:29.473 [2024-11-26 18:35:03.819095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:29.473 [2024-11-26 18:35:03.819109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.312 ms 00:33:29.473 [2024-11-26 18:35:03.819120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.040 [2024-11-26 18:35:04.446748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.040 [2024-11-26 18:35:04.446906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:30.040 [2024-11-26 18:35:04.446945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 626.599 ms 00:33:30.040 [2024-11-26 18:35:04.446958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.040 [2024-11-26 18:35:04.451427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.040 [2024-11-26 18:35:04.451489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:30.040 [2024-11-26 18:35:04.451536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.112 ms 00:33:30.040 [2024-11-26 18:35:04.451557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.040 [2024-11-26 18:35:04.452166] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:33:30.040 [2024-11-26 18:35:04.452224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.040 [2024-11-26 18:35:04.452239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:30.040 [2024-11-26 18:35:04.452253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.614 ms 00:33:30.040 [2024-11-26 18:35:04.452266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.040 [2024-11-26 18:35:04.452324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.040 [2024-11-26 18:35:04.452343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:30.040 [2024-11-26 18:35:04.452356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:30.040 [2024-11-26 18:35:04.452405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.040 [2024-11-26 18:35:04.452499] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 633.717 ms, result 0 00:33:30.040 [2024-11-26 18:35:04.452573] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:33:30.040 [2024-11-26 18:35:04.452714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.040 [2024-11-26 18:35:04.452731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:33:30.040 [2024-11-26 18:35:04.452743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.142 ms 00:33:30.040 [2024-11-26 18:35:04.452753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.976 [2024-11-26 18:35:05.082239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.976 [2024-11-26 18:35:05.082395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:33:30.976 [2024-11-26 18:35:05.082455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 628.400 ms 00:33:30.976 [2024-11-26 18:35:05.082467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.976 [2024-11-26 18:35:05.086940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.976 [2024-11-26 18:35:05.086984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:33:30.976 [2024-11-26 18:35:05.087001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.077 ms 00:33:30.976 [2024-11-26 18:35:05.087012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.976 [2024-11-26 18:35:05.087558] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:33:30.976 [2024-11-26 18:35:05.087616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.976 [2024-11-26 18:35:05.087632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:33:30.976 [2024-11-26 18:35:05.087646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.534 ms 00:33:30.976 [2024-11-26 18:35:05.087673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.976 [2024-11-26 18:35:05.087718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.977 [2024-11-26 18:35:05.087750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:33:30.977 [2024-11-26 18:35:05.087778] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:33:30.977 [2024-11-26 18:35:05.087789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.977 [2024-11-26 18:35:05.087842] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 635.268 ms, result 0 00:33:30.977 [2024-11-26 18:35:05.087895] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:30.977 [2024-11-26 18:35:05.087913] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:33:30.977 [2024-11-26 18:35:05.087927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.977 [2024-11-26 18:35:05.087940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:33:30.977 [2024-11-26 18:35:05.087951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1269.206 ms 00:33:30.977 [2024-11-26 18:35:05.087962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.977 [2024-11-26 18:35:05.088000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.977 [2024-11-26 18:35:05.088022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:33:30.977 [2024-11-26 18:35:05.088034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:33:30.977 [2024-11-26 18:35:05.088044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.977 [2024-11-26 18:35:05.099362] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:33:30.977 [2024-11-26 18:35:05.099543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.977 [2024-11-26 18:35:05.099561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:33:30.977 [2024-11-26 18:35:05.099594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.476 ms 00:33:30.977 [2024-11-26 18:35:05.099606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.977 [2024-11-26 18:35:05.100402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.977 [2024-11-26 18:35:05.100438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:33:30.977 [2024-11-26 18:35:05.100469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.640 ms 00:33:30.977 [2024-11-26 18:35:05.100479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.977 [2024-11-26 18:35:05.102534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.977 [2024-11-26 18:35:05.102585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:33:30.977 [2024-11-26 18:35:05.102614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.025 ms 00:33:30.977 [2024-11-26 18:35:05.102624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.977 [2024-11-26 18:35:05.102676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.977 [2024-11-26 18:35:05.102693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:33:30.977 [2024-11-26 18:35:05.102711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:33:30.977 [2024-11-26 18:35:05.102721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.977 [2024-11-26 18:35:05.102888] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.977 [2024-11-26 18:35:05.102908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:33:30.977 [2024-11-26 18:35:05.102920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:33:30.977 [2024-11-26 18:35:05.102931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.977 [2024-11-26 18:35:05.102964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.977 [2024-11-26 18:35:05.102978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:33:30.977 [2024-11-26 18:35:05.102989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:33:30.977 [2024-11-26 18:35:05.102999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.977 [2024-11-26 18:35:05.103042] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:33:30.977 [2024-11-26 18:35:05.103058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.977 [2024-11-26 18:35:05.103069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:33:30.977 [2024-11-26 18:35:05.103081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:33:30.977 [2024-11-26 18:35:05.103091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.977 [2024-11-26 18:35:05.103163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:30.977 [2024-11-26 18:35:05.103181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:33:30.977 [2024-11-26 18:35:05.103192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:33:30.977 [2024-11-26 18:35:05.103202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:30.977 [2024-11-26 18:35:05.104520] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1544.867 ms, result 0 00:33:30.977 [2024-11-26 18:35:05.120097] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:30.977 [2024-11-26 18:35:05.136101] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:33:30.977 [2024-11-26 18:35:05.145264] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:30.977 18:35:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:30.977 18:35:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:33:30.977 18:35:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:33:30.977 18:35:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:33:30.977 18:35:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:33:30.977 18:35:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:33:30.977 18:35:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:33:30.977 18:35:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:30.977 Validate MD5 checksum, iteration 1 00:33:30.977 18:35:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:33:30.977 18:35:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:30.977 18:35:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:30.977 18:35:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:30.977 18:35:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:30.977 18:35:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:30.977 18:35:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:33:30.977 [2024-11-26 18:35:05.257909] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:33:30.977 [2024-11-26 18:35:05.258080] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84768 ] 00:33:30.977 [2024-11-26 18:35:05.434232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.236 [2024-11-26 18:35:05.581220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.140  [2024-11-26T18:35:08.537Z] Copying: 459/1024 [MB] (459 MBps) [2024-11-26T18:35:08.537Z] Copying: 911/1024 [MB] (452 MBps) [2024-11-26T18:35:09.910Z] Copying: 1024/1024 [MB] (average 454 MBps) 00:33:35.449 00:33:35.449 18:35:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:33:35.449 18:35:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:37.350 18:35:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:37.350 Validate MD5 checksum, iteration 2 00:33:37.350 18:35:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=a120c35d1af93e712eb91256f9eef129 00:33:37.350 18:35:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ a120c35d1af93e712eb91256f9eef129 != \a\1\2\0\c\3\5\d\1\a\f\9\3\e\7\1\2\e\b\9\1\2\5\6\f\9\e\e\f\1\2\9 ]] 00:33:37.350 18:35:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:37.350 18:35:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:37.350 18:35:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:33:37.350 18:35:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:37.350 18:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:33:37.350 18:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:33:37.350 18:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:33:37.350 18:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:33:37.350 18:35:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' 
--rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:33:37.350 [2024-11-26 18:35:11.678570] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 00:33:37.350 [2024-11-26 18:35:11.678759] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84832 ] 00:33:37.608 [2024-11-26 18:35:11.859317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:37.608 [2024-11-26 18:35:12.018114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:39.548  [2024-11-26T18:35:14.946Z] Copying: 515/1024 [MB] (515 MBps) [2024-11-26T18:35:15.882Z] Copying: 1024/1024 [MB] (average 516 MBps) 00:33:41.421 00:33:41.421 18:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:33:41.421 18:35:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:43.326 18:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:33:43.326 18:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b9672791b0e9cd2ec74b021a236eb3d9 00:33:43.326 18:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b9672791b0e9cd2ec74b021a236eb3d9 != \b\9\6\7\2\7\9\1\b\0\e\9\c\d\2\e\c\7\4\b\0\2\1\a\2\3\6\e\b\3\d\9 ]] 00:33:43.326 18:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:33:43.326 18:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:33:43.326 18:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:33:43.326 18:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:33:43.326 18:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:33:43.326 18:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:33:43.326 18:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:33:43.326 18:35:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:33:43.326 18:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:33:43.326 18:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:33:43.326 18:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84733 ]] 00:33:43.326 18:35:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84733 00:33:43.326 18:35:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84733 ']' 00:33:43.326 18:35:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84733 00:33:43.326 18:35:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:33:43.326 18:35:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:43.585 18:35:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84733 00:33:43.585 killing process with pid 84733 00:33:43.585 18:35:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:43.585 18:35:17 
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:43.585 18:35:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84733' 00:33:43.585 18:35:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84733 00:33:43.585 18:35:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84733 00:33:44.520 [2024-11-26 18:35:18.655867] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:33:44.520 [2024-11-26 18:35:18.672076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.520 [2024-11-26 18:35:18.672125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:33:44.520 [2024-11-26 18:35:18.672160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:33:44.520 [2024-11-26 18:35:18.672171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.520 [2024-11-26 18:35:18.672201] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:33:44.520 [2024-11-26 18:35:18.675622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.520 [2024-11-26 18:35:18.675876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:33:44.520 [2024-11-26 18:35:18.675928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.399 ms 00:33:44.520 [2024-11-26 18:35:18.675955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.520 [2024-11-26 18:35:18.676250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.520 [2024-11-26 18:35:18.676270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:33:44.520 [2024-11-26 18:35:18.676283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.253 ms 00:33:44.520 [2024-11-26 18:35:18.676294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.520 [2024-11-26 18:35:18.677619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.520 [2024-11-26 18:35:18.677673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:33:44.520 [2024-11-26 18:35:18.677705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.304 ms 00:33:44.520 [2024-11-26 18:35:18.677739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.520 [2024-11-26 18:35:18.678940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.520 [2024-11-26 18:35:18.679162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:33:44.520 [2024-11-26 18:35:18.679200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.158 ms 00:33:44.520 [2024-11-26 18:35:18.679223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.520 [2024-11-26 18:35:18.710418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.520 [2024-11-26 18:35:18.710461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:33:44.520 [2024-11-26 18:35:18.710500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.124 ms 00:33:44.520 [2024-11-26 18:35:18.710511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.521 [2024-11-26 18:35:18.717084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.521 [2024-11-26 18:35:18.717126] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:33:44.521 [2024-11-26 18:35:18.717158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.531 ms 00:33:44.521 [2024-11-26 18:35:18.717170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.521 [2024-11-26 18:35:18.717247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.521 [2024-11-26 18:35:18.717266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:33:44.521 [2024-11-26 18:35:18.717279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:33:44.521 [2024-11-26 18:35:18.717298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.521 [2024-11-26 18:35:18.728101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.521 [2024-11-26 18:35:18.728142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:33:44.521 [2024-11-26 18:35:18.728172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.782 ms 00:33:44.521 [2024-11-26 18:35:18.728182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.521 [2024-11-26 18:35:18.738526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.521 [2024-11-26 18:35:18.738609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:33:44.521 [2024-11-26 18:35:18.738641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.292 ms 00:33:44.521 [2024-11-26 18:35:18.738652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.521 [2024-11-26 18:35:18.748847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.521 [2024-11-26 18:35:18.748886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:33:44.521 [2024-11-26 18:35:18.748917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.156 ms 00:33:44.521 [2024-11-26 18:35:18.748927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.521 [2024-11-26 18:35:18.759231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.521 [2024-11-26 18:35:18.759272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:33:44.521 [2024-11-26 18:35:18.759302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.240 ms 00:33:44.521 [2024-11-26 18:35:18.759311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.521 [2024-11-26 18:35:18.759348] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:33:44.521 [2024-11-26 18:35:18.759370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:33:44.521 [2024-11-26 18:35:18.759384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:33:44.521 [2024-11-26 18:35:18.759394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:33:44.521 [2024-11-26 18:35:18.759405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:44.521 [2024-11-26 18:35:18.759416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:44.521 [2024-11-26 18:35:18.759426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:44.521 [2024-11-26 
18:35:18.759436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:44.521 [2024-11-26 18:35:18.759446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:44.521 [2024-11-26 18:35:18.759456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:44.521 [2024-11-26 18:35:18.759467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:44.521 [2024-11-26 18:35:18.759477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:44.521 [2024-11-26 18:35:18.759487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:44.521 [2024-11-26 18:35:18.759497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:44.521 [2024-11-26 18:35:18.759507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:44.521 [2024-11-26 18:35:18.759517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:44.521 [2024-11-26 18:35:18.759527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:44.521 [2024-11-26 18:35:18.759538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:44.521 [2024-11-26 18:35:18.759548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:44.521 [2024-11-26 18:35:18.759598] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:33:44.521 [2024-11-26 18:35:18.759610] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: cddebbe7-5cfa-4147-bb3d-f8964dffa03a 00:33:44.521 [2024-11-26 18:35:18.759621] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:33:44.521 [2024-11-26 18:35:18.759631] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:33:44.521 [2024-11-26 18:35:18.759641] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:33:44.521 [2024-11-26 18:35:18.759652] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:33:44.521 [2024-11-26 18:35:18.759663] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:33:44.521 [2024-11-26 18:35:18.759689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:33:44.521 [2024-11-26 18:35:18.759708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:33:44.521 [2024-11-26 18:35:18.759718] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:33:44.521 [2024-11-26 18:35:18.759727] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:33:44.521 [2024-11-26 18:35:18.759738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.521 [2024-11-26 18:35:18.759750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:33:44.521 [2024-11-26 18:35:18.759762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.391 ms 00:33:44.521 [2024-11-26 18:35:18.759773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.521 [2024-11-26 18:35:18.776923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.521 [2024-11-26 18:35:18.776959] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:33:44.521 [2024-11-26 18:35:18.776975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.126 ms 00:33:44.521 [2024-11-26 18:35:18.776985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.521 [2024-11-26 18:35:18.777476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:33:44.521 [2024-11-26 18:35:18.777494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:33:44.521 [2024-11-26 18:35:18.777523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.457 ms 00:33:44.521 [2024-11-26 18:35:18.777534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.521 [2024-11-26 18:35:18.826121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:44.521 [2024-11-26 18:35:18.826176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:33:44.521 [2024-11-26 18:35:18.826192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:44.521 [2024-11-26 18:35:18.826210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.521 [2024-11-26 18:35:18.826251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:44.521 [2024-11-26 18:35:18.826265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:33:44.521 [2024-11-26 18:35:18.826276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:44.521 [2024-11-26 18:35:18.826285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.521 [2024-11-26 18:35:18.826410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:44.521 [2024-11-26 18:35:18.826430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:33:44.521 [2024-11-26 18:35:18.826441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:44.521 [2024-11-26 18:35:18.826452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.521 [2024-11-26 18:35:18.826484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:44.521 [2024-11-26 18:35:18.826498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:33:44.521 [2024-11-26 18:35:18.826509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:44.521 [2024-11-26 18:35:18.826519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.521 [2024-11-26 18:35:18.913707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:44.521 [2024-11-26 18:35:18.913783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:33:44.521 [2024-11-26 18:35:18.913816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:44.521 [2024-11-26 18:35:18.913828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.780 [2024-11-26 18:35:18.983983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:44.780 [2024-11-26 18:35:18.984051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:33:44.780 [2024-11-26 18:35:18.984085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:44.781 [2024-11-26 18:35:18.984096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.781 [2024-11-26 18:35:18.984226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:44.781 [2024-11-26 
18:35:18.984243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:33:44.781 [2024-11-26 18:35:18.984255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:44.781 [2024-11-26 18:35:18.984266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.781 [2024-11-26 18:35:18.984339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:44.781 [2024-11-26 18:35:18.984375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:33:44.781 [2024-11-26 18:35:18.984388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:44.781 [2024-11-26 18:35:18.984398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.781 [2024-11-26 18:35:18.984525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:44.781 [2024-11-26 18:35:18.984544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:33:44.781 [2024-11-26 18:35:18.984556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:44.781 [2024-11-26 18:35:18.984566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.781 [2024-11-26 18:35:18.984636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:44.781 [2024-11-26 18:35:18.984656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:33:44.781 [2024-11-26 18:35:18.984675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:44.781 [2024-11-26 18:35:18.984686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.781 [2024-11-26 18:35:18.984733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:44.781 [2024-11-26 18:35:18.984763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:33:44.781 [2024-11-26 18:35:18.984792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:44.781 [2024-11-26 18:35:18.984808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.781 [2024-11-26 18:35:18.984887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:33:44.781 [2024-11-26 18:35:18.984924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:33:44.781 [2024-11-26 18:35:18.984938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:33:44.781 [2024-11-26 18:35:18.984954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:33:44.781 [2024-11-26 18:35:18.985180] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 313.030 ms, result 0 00:33:46.684 18:35:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:33:46.684 18:35:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:33:46.684 18:35:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:33:46.684 18:35:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:33:46.684 18:35:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:33:46.684 18:35:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:33:46.684 Remove shared memory files 00:33:46.684 18:35:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:33:46.684 18:35:20 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@204 -- # echo Remove shared memory files 00:33:46.684 18:35:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:33:46.684 18:35:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:33:46.684 18:35:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84504 00:33:46.684 18:35:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:46.684 18:35:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:33:46.684 ************************************ 00:33:46.684 END TEST ftl_upgrade_shutdown 00:33:46.684 ************************************ 00:33:46.684 00:33:46.684 real 1m28.948s 00:33:46.684 user 2m1.097s 00:33:46.684 sys 0m26.579s 00:33:46.684 18:35:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:46.684 18:35:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:33:46.684 18:35:20 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:33:46.684 18:35:20 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:33:46.684 18:35:20 ftl -- ftl/ftl.sh@14 -- # killprocess 76842 00:33:46.684 18:35:20 ftl -- common/autotest_common.sh@954 -- # '[' -z 76842 ']' 00:33:46.684 18:35:20 ftl -- common/autotest_common.sh@958 -- # kill -0 76842 00:33:46.684 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76842) - No such process 00:33:46.684 Process with pid 76842 is not found 00:33:46.684 18:35:20 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76842 is not found' 00:33:46.684 18:35:20 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:33:46.684 18:35:20 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84957 00:33:46.684 18:35:20 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84957 00:33:46.684 18:35:20 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:46.684 18:35:20 ftl -- common/autotest_common.sh@835 -- # '[' -z 84957 ']' 00:33:46.684 18:35:20 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:46.684 18:35:20 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:46.685 18:35:20 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:46.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:46.685 18:35:20 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:46.685 18:35:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:46.685 [2024-11-26 18:35:20.833892] Starting SPDK v25.01-pre git sha1 51a65534e / DPDK 24.03.0 initialization... 
00:33:46.685 [2024-11-26 18:35:20.834334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84957 ] 00:33:46.685 [2024-11-26 18:35:21.017874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:46.685 [2024-11-26 18:35:21.125670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:47.620 18:35:21 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:47.620 18:35:21 ftl -- common/autotest_common.sh@868 -- # return 0 00:33:47.620 18:35:21 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:33:47.879 nvme0n1 00:33:47.879 18:35:22 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:33:47.879 18:35:22 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:47.879 18:35:22 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:33:48.138 18:35:22 ftl -- ftl/common.sh@28 -- # stores=549c2704-0645-4f57-b09c-58eff33870ea 00:33:48.138 18:35:22 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:33:48.138 18:35:22 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 549c2704-0645-4f57-b09c-58eff33870ea 00:33:48.396 18:35:22 ftl -- ftl/ftl.sh@23 -- # killprocess 84957 00:33:48.396 18:35:22 ftl -- common/autotest_common.sh@954 -- # '[' -z 84957 ']' 00:33:48.396 18:35:22 ftl -- common/autotest_common.sh@958 -- # kill -0 84957 00:33:48.396 18:35:22 ftl -- common/autotest_common.sh@959 -- # uname 00:33:48.396 18:35:22 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:48.396 18:35:22 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84957 00:33:48.396 killing process with pid 84957 00:33:48.396 18:35:22 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:48.396 18:35:22 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:48.396 18:35:22 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84957' 00:33:48.396 18:35:22 ftl -- common/autotest_common.sh@973 -- # kill 84957 00:33:48.396 18:35:22 ftl -- common/autotest_common.sh@978 -- # wait 84957 00:33:50.300 18:35:24 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:50.560 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:50.560 Waiting for block devices as requested 00:33:50.560 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:50.560 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:50.818 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:33:50.818 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:33:56.103 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:33:56.103 Remove shared memory files 00:33:56.103 18:35:30 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:33:56.103 18:35:30 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:33:56.103 18:35:30 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:33:56.103 18:35:30 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:33:56.103 18:35:30 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:33:56.103 18:35:30 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:33:56.103 18:35:30 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:33:56.103 
************************************ 00:33:56.103 END TEST ftl 00:33:56.103 ************************************ 00:33:56.103 00:33:56.103 real 12m25.084s 00:33:56.103 user 15m29.112s 00:33:56.103 sys 1m35.387s 00:33:56.103 18:35:30 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:56.103 18:35:30 ftl -- common/autotest_common.sh@10 -- # set +x 00:33:56.103 18:35:30 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:33:56.103 18:35:30 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:56.103 18:35:30 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:33:56.103 18:35:30 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:33:56.103 18:35:30 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:33:56.103 18:35:30 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:33:56.103 18:35:30 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:33:56.103 18:35:30 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:33:56.103 18:35:30 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:33:56.103 18:35:30 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:33:56.103 18:35:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:56.103 18:35:30 -- common/autotest_common.sh@10 -- # set +x 00:33:56.103 18:35:30 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:33:56.103 18:35:30 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:33:56.103 18:35:30 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:33:56.103 18:35:30 -- common/autotest_common.sh@10 -- # set +x 00:33:58.006 INFO: APP EXITING 00:33:58.006 INFO: killing all VMs 00:33:58.006 INFO: killing vhost app 00:33:58.006 INFO: EXIT DONE 00:33:58.006 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:58.571 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:33:58.571 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:33:58.571 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:33:58.571 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:33:58.828 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:59.394 Cleaning 00:33:59.394 Removing: /var/run/dpdk/spdk0/config 00:33:59.394 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:59.394 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:59.394 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:59.394 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:59.394 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:59.394 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:59.394 Removing: /var/run/dpdk/spdk0 00:33:59.394 Removing: /var/run/dpdk/spdk_pid57704 00:33:59.394 Removing: /var/run/dpdk/spdk_pid57944 00:33:59.394 Removing: /var/run/dpdk/spdk_pid58173 00:33:59.394 Removing: /var/run/dpdk/spdk_pid58277 00:33:59.394 Removing: /var/run/dpdk/spdk_pid58333 00:33:59.394 Removing: /var/run/dpdk/spdk_pid58465 00:33:59.394 Removing: /var/run/dpdk/spdk_pid58490 00:33:59.394 Removing: /var/run/dpdk/spdk_pid58700 00:33:59.394 Removing: /var/run/dpdk/spdk_pid58806 00:33:59.394 Removing: /var/run/dpdk/spdk_pid58913 00:33:59.394 Removing: /var/run/dpdk/spdk_pid59041 00:33:59.394 Removing: /var/run/dpdk/spdk_pid59149 00:33:59.394 Removing: /var/run/dpdk/spdk_pid59188 00:33:59.394 Removing: /var/run/dpdk/spdk_pid59230 00:33:59.394 Removing: /var/run/dpdk/spdk_pid59301 00:33:59.394 Removing: /var/run/dpdk/spdk_pid59396 00:33:59.394 Removing: /var/run/dpdk/spdk_pid59878 00:33:59.394 Removing: /var/run/dpdk/spdk_pid59953 
00:33:59.394 Removing: /var/run/dpdk/spdk_pid60029 00:33:59.394 Removing: /var/run/dpdk/spdk_pid60055 00:33:59.394 Removing: /var/run/dpdk/spdk_pid60205 00:33:59.394 Removing: /var/run/dpdk/spdk_pid60227 00:33:59.394 Removing: /var/run/dpdk/spdk_pid60375 00:33:59.394 Removing: /var/run/dpdk/spdk_pid60402 00:33:59.394 Removing: /var/run/dpdk/spdk_pid60466 00:33:59.394 Removing: /var/run/dpdk/spdk_pid60489 00:33:59.394 Removing: /var/run/dpdk/spdk_pid60559 00:33:59.394 Removing: /var/run/dpdk/spdk_pid60577 00:33:59.394 Removing: /var/run/dpdk/spdk_pid60772 00:33:59.394 Removing: /var/run/dpdk/spdk_pid60814 00:33:59.394 Removing: /var/run/dpdk/spdk_pid60898 00:33:59.394 Removing: /var/run/dpdk/spdk_pid61088 00:33:59.394 Removing: /var/run/dpdk/spdk_pid61189 00:33:59.394 Removing: /var/run/dpdk/spdk_pid61231 00:33:59.394 Removing: /var/run/dpdk/spdk_pid61707 00:33:59.394 Removing: /var/run/dpdk/spdk_pid61805 00:33:59.394 Removing: /var/run/dpdk/spdk_pid61925 00:33:59.394 Removing: /var/run/dpdk/spdk_pid61984 00:33:59.394 Removing: /var/run/dpdk/spdk_pid62012 00:33:59.394 Removing: /var/run/dpdk/spdk_pid62094 00:33:59.394 Removing: /var/run/dpdk/spdk_pid62730 00:33:59.394 Removing: /var/run/dpdk/spdk_pid62772 00:33:59.394 Removing: /var/run/dpdk/spdk_pid63293 00:33:59.394 Removing: /var/run/dpdk/spdk_pid63402 00:33:59.394 Removing: /var/run/dpdk/spdk_pid63518 00:33:59.394 Removing: /var/run/dpdk/spdk_pid63571 00:33:59.394 Removing: /var/run/dpdk/spdk_pid63602 00:33:59.394 Removing: /var/run/dpdk/spdk_pid63627 00:33:59.394 Removing: /var/run/dpdk/spdk_pid65525 00:33:59.394 Removing: /var/run/dpdk/spdk_pid65674 00:33:59.394 Removing: /var/run/dpdk/spdk_pid65678 00:33:59.394 Removing: /var/run/dpdk/spdk_pid65690 00:33:59.394 Removing: /var/run/dpdk/spdk_pid65744 00:33:59.394 Removing: /var/run/dpdk/spdk_pid65748 00:33:59.394 Removing: /var/run/dpdk/spdk_pid65760 00:33:59.394 Removing: /var/run/dpdk/spdk_pid65809 00:33:59.394 Removing: /var/run/dpdk/spdk_pid65814 00:33:59.394 Removing: /var/run/dpdk/spdk_pid65826 00:33:59.394 Removing: /var/run/dpdk/spdk_pid65872 00:33:59.394 Removing: /var/run/dpdk/spdk_pid65876 00:33:59.394 Removing: /var/run/dpdk/spdk_pid65888 00:33:59.394 Removing: /var/run/dpdk/spdk_pid67293 00:33:59.394 Removing: /var/run/dpdk/spdk_pid67405 00:33:59.394 Removing: /var/run/dpdk/spdk_pid68826 00:33:59.394 Removing: /var/run/dpdk/spdk_pid70578 00:33:59.394 Removing: /var/run/dpdk/spdk_pid70663 00:33:59.394 Removing: /var/run/dpdk/spdk_pid70738 00:33:59.394 Removing: /var/run/dpdk/spdk_pid70848 00:33:59.394 Removing: /var/run/dpdk/spdk_pid70952 00:33:59.394 Removing: /var/run/dpdk/spdk_pid71050 00:33:59.394 Removing: /var/run/dpdk/spdk_pid71135 00:33:59.394 Removing: /var/run/dpdk/spdk_pid71210 00:33:59.394 Removing: /var/run/dpdk/spdk_pid71322 00:33:59.394 Removing: /var/run/dpdk/spdk_pid71420 00:33:59.394 Removing: /var/run/dpdk/spdk_pid71520 00:33:59.394 Removing: /var/run/dpdk/spdk_pid71601 00:33:59.394 Removing: /var/run/dpdk/spdk_pid71682 00:33:59.394 Removing: /var/run/dpdk/spdk_pid71787 00:33:59.394 Removing: /var/run/dpdk/spdk_pid71890 00:33:59.394 Removing: /var/run/dpdk/spdk_pid71986 00:33:59.394 Removing: /var/run/dpdk/spdk_pid72066 00:33:59.394 Removing: /var/run/dpdk/spdk_pid72142 00:33:59.394 Removing: /var/run/dpdk/spdk_pid72252 00:33:59.394 Removing: /var/run/dpdk/spdk_pid72344 00:33:59.394 Removing: /var/run/dpdk/spdk_pid72440 00:33:59.395 Removing: /var/run/dpdk/spdk_pid72524 00:33:59.395 Removing: /var/run/dpdk/spdk_pid72595 00:33:59.653 Removing: 
00:33:59.653 Removing: /var/run/dpdk/spdk_pid72758
00:33:59.653 Removing: /var/run/dpdk/spdk_pid72867
00:33:59.653 Removing: /var/run/dpdk/spdk_pid72959
00:33:59.653 Removing: /var/run/dpdk/spdk_pid73060
00:33:59.653 Removing: /var/run/dpdk/spdk_pid73144
00:33:59.653 Removing: /var/run/dpdk/spdk_pid73224
00:33:59.653 Removing: /var/run/dpdk/spdk_pid73294
00:33:59.653 Removing: /var/run/dpdk/spdk_pid73374
00:33:59.653 Removing: /var/run/dpdk/spdk_pid73483
00:33:59.653 Removing: /var/run/dpdk/spdk_pid73574
00:33:59.653 Removing: /var/run/dpdk/spdk_pid73730
00:33:59.653 Removing: /var/run/dpdk/spdk_pid74020
00:33:59.653 Removing: /var/run/dpdk/spdk_pid74055
00:33:59.653 Removing: /var/run/dpdk/spdk_pid74542
00:33:59.653 Removing: /var/run/dpdk/spdk_pid74725
00:33:59.653 Removing: /var/run/dpdk/spdk_pid74827
00:33:59.653 Removing: /var/run/dpdk/spdk_pid74943
00:33:59.653 Removing: /var/run/dpdk/spdk_pid75004
00:33:59.653 Removing: /var/run/dpdk/spdk_pid75024
00:33:59.653 Removing: /var/run/dpdk/spdk_pid75318
00:33:59.653 Removing: /var/run/dpdk/spdk_pid75384
00:33:59.653 Removing: /var/run/dpdk/spdk_pid75479
00:33:59.653 Removing: /var/run/dpdk/spdk_pid75904
00:33:59.653 Removing: /var/run/dpdk/spdk_pid76051
00:33:59.653 Removing: /var/run/dpdk/spdk_pid76842
00:33:59.653 Removing: /var/run/dpdk/spdk_pid76991
00:33:59.653 Removing: /var/run/dpdk/spdk_pid77194
00:33:59.653 Removing: /var/run/dpdk/spdk_pid77302
00:33:59.653 Removing: /var/run/dpdk/spdk_pid77707
00:33:59.653 Removing: /var/run/dpdk/spdk_pid77992
00:33:59.653 Removing: /var/run/dpdk/spdk_pid78353
00:33:59.654 Removing: /var/run/dpdk/spdk_pid78563
00:33:59.654 Removing: /var/run/dpdk/spdk_pid78710
00:33:59.654 Removing: /var/run/dpdk/spdk_pid78774
00:33:59.654 Removing: /var/run/dpdk/spdk_pid78929
00:33:59.654 Removing: /var/run/dpdk/spdk_pid78964
00:33:59.654 Removing: /var/run/dpdk/spdk_pid79024
00:33:59.654 Removing: /var/run/dpdk/spdk_pid79248
00:33:59.654 Removing: /var/run/dpdk/spdk_pid79485
00:33:59.654 Removing: /var/run/dpdk/spdk_pid79949
00:33:59.654 Removing: /var/run/dpdk/spdk_pid80445
00:33:59.654 Removing: /var/run/dpdk/spdk_pid80922
00:33:59.654 Removing: /var/run/dpdk/spdk_pid81488
00:33:59.654 Removing: /var/run/dpdk/spdk_pid81636
00:33:59.654 Removing: /var/run/dpdk/spdk_pid81729
00:33:59.654 Removing: /var/run/dpdk/spdk_pid82417
00:33:59.654 Removing: /var/run/dpdk/spdk_pid82488
00:33:59.654 Removing: /var/run/dpdk/spdk_pid82961
00:33:59.654 Removing: /var/run/dpdk/spdk_pid83378
00:33:59.654 Removing: /var/run/dpdk/spdk_pid83935
00:33:59.654 Removing: /var/run/dpdk/spdk_pid84059
00:33:59.654 Removing: /var/run/dpdk/spdk_pid84106
00:33:59.654 Removing: /var/run/dpdk/spdk_pid84167
00:33:59.654 Removing: /var/run/dpdk/spdk_pid84230
00:33:59.654 Removing: /var/run/dpdk/spdk_pid84305
00:33:59.654 Removing: /var/run/dpdk/spdk_pid84504
00:33:59.654 Removing: /var/run/dpdk/spdk_pid84591
00:33:59.654 Removing: /var/run/dpdk/spdk_pid84654
00:33:59.654 Removing: /var/run/dpdk/spdk_pid84733
00:33:59.654 Removing: /var/run/dpdk/spdk_pid84768
00:33:59.654 Removing: /var/run/dpdk/spdk_pid84832
00:33:59.654 Removing: /var/run/dpdk/spdk_pid84957
00:33:59.654 Clean
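
Note: the "Cleaning" block above is autotest_cleanup tearing down leftover DPDK runtime state: the spdk0 primary-process directory (config, fbarray_memseg-*, fbarray_memzone, hugepage_info) and one /var/run/dpdk/spdk_pid<N> lock file per SPDK process started during the run. A minimal stand-alone sketch of an equivalent teardown follows; this is a hypothetical helper, not the actual autotest_cleanup from autotest_common.sh, and it assumes the default DPDK runtime prefix and that no SPDK primary process is still alive:

  #!/usr/bin/env bash
  # Sketch: remove stale DPDK runtime files left behind by SPDK test runs.
  runtime_dir=/var/run/dpdk/spdk0

  # Illustrative guard only: skip if an SPDK target still appears to run
  # (the real cleanup kills leftover processes before removing files).
  if ! pgrep -x spdk_tgt >/dev/null; then
      for f in "$runtime_dir"/config "$runtime_dir"/fbarray_* "$runtime_dir"/hugepage_info; do
          [ -e "$f" ] && echo "Removing: $f" && rm -f "$f"
      done
      rmdir "$runtime_dir" 2>/dev/null   # the directory itself, if now empty
      rm -f /var/run/dpdk/spdk_pid*      # per-PID lock files
  fi
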
00:33:59.654 18:35:34 -- common/autotest_common.sh@1453 -- # return 0
00:33:59.654 18:35:34 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:33:59.654 18:35:34 -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:59.654 18:35:34 -- common/autotest_common.sh@10 -- # set +x
00:33:59.912 18:35:34 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:33:59.912 18:35:34 -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:59.912 18:35:34 -- common/autotest_common.sh@10 -- # set +x
00:33:59.912 18:35:34 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:33:59.912 18:35:34 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:33:59.912 18:35:34 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:33:59.912 18:35:34 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:33:59.912 18:35:34 -- spdk/autotest.sh@398 -- # hostname
00:33:59.912 18:35:34 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:34:00.171 geninfo: WARNING: invalid characters removed from testname!
00:34:22.098 18:35:56 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:25.419 18:35:59 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:28.703 18:36:02 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:30.606 18:36:04 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:33.138 18:36:07 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:36.429 18:36:10 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:38.334 18:36:12 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
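
Note: the lcov trace above follows the usual capture/merge/filter pattern: capture the counters produced by the test run into cov_test.info (tagged with the VM's hostname, here fedora39-cloud-1721788873-2326), merge it with the cov_base.info baseline (presumably captured before the tests ran) into cov_total.info, then repeatedly -r (remove) patterns that should not count toward SPDK coverage: the bundled DPDK tree, system code under /usr, and a few example/app directories. A condensed sketch of the same sequence; the --rc flag list is abbreviated, $OUT stands in for /home/vagrant/spdk_repo/spdk/../output, and the --ignore-errors option used on the '/usr/*' step is omitted:

  #!/usr/bin/env bash
  # Sketch of the coverage post-processing sequence traced above.
  LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"
  OUT=/home/vagrant/spdk_repo/output   # stand-in for spdk/../output in the log

  # 1. Capture counters accumulated while the tests ran.
  $LCOV -c --no-external -d /home/vagrant/spdk_repo/spdk \
        -t "$(hostname)" -o "$OUT/cov_test.info"

  # 2. Merge the pre-test baseline with the post-test capture.
  $LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

  # 3. Filter out code that should not count toward SPDK coverage.
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
                 '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      $LCOV -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
  done
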
00:34:38.334 18:36:12 -- spdk/autorun.sh@1 -- $ timing_finish
00:34:38.334 18:36:12 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:34:38.334 18:36:12 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:38.334 18:36:12 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:34:38.334 18:36:12 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:38.334 + [[ -n 5299 ]]
00:34:38.334 + sudo kill 5299
00:34:38.344 [Pipeline] }
00:34:38.360 [Pipeline] // timeout
00:34:38.366 [Pipeline] }
00:34:38.381 [Pipeline] // stage
00:34:38.386 [Pipeline] }
00:34:38.401 [Pipeline] // catchError
00:34:38.412 [Pipeline] stage
00:34:38.414 [Pipeline] { (Stop VM)
00:34:38.429 [Pipeline] sh
00:34:38.710 + vagrant halt
00:34:42.004 ==> default: Halting domain...
00:34:48.583 [Pipeline] sh
00:34:48.863 + vagrant destroy -f
00:34:52.172 ==> default: Removing domain...
00:34:52.476 [Pipeline] sh
00:34:52.757 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:34:52.766 [Pipeline] }
00:34:52.782 [Pipeline] // stage
00:34:52.788 [Pipeline] }
00:34:52.803 [Pipeline] // dir
00:34:52.808 [Pipeline] }
00:34:52.823 [Pipeline] // wrap
00:34:52.829 [Pipeline] }
00:34:52.842 [Pipeline] // catchError
00:34:52.852 [Pipeline] stage
00:34:52.854 [Pipeline] { (Epilogue)
00:34:52.866 [Pipeline] sh
00:34:53.148 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:59.729 [Pipeline] catchError
00:34:59.731 [Pipeline] {
00:34:59.744 [Pipeline] sh
00:35:00.026 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:35:00.027 Artifacts sizes are good
00:35:00.035 [Pipeline] }
00:35:00.049 [Pipeline] // catchError
00:35:00.061 [Pipeline] archiveArtifacts
00:35:00.068 Archiving artifacts
00:35:00.176 [Pipeline] cleanWs
00:35:00.185 [WS-CLEANUP] Deleting project workspace...
00:35:00.185 [WS-CLEANUP] Deferred wipeout is used...
00:35:00.191 [WS-CLEANUP] done
00:35:00.192 [Pipeline] }
00:35:00.208 [Pipeline] // stage
00:35:00.212 [Pipeline] }
00:35:00.226 [Pipeline] // node
00:35:00.231 [Pipeline] End of Pipeline
00:35:00.265 Finished: SUCCESS
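
Note: the timing_finish step traced before the VM teardown renders timing.txt into an SVG flame graph with Brendan Gregg's FlameGraph script, which is what makes the per-step build timing browsable from the archived artifacts. A stand-alone equivalent is sketched below; it assumes a FlameGraph checkout at /usr/local/FlameGraph and that timing.txt is in the folded "name seconds" format flamegraph.pl consumes, and the redirect to timing.svg is an added assumption (flamegraph.pl writes the SVG to stdout, and the trace above does not show where autorun redirects it):

  # Sketch: render the autotest timing log as a flame graph (hypothetical
  # output path; the real job keeps it under the archived output directory).
  /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' \
      --nametype Step: --countname seconds \
      /home/vagrant/spdk_repo/output/timing.txt > timing.svg
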